Jan 29 15:27:34 crc systemd[1]: Starting Kubernetes Kubelet... Jan 29 15:27:35 crc restorecon[4686]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 29 15:27:35 
crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 
15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:27:35 crc 
restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:27:35 
crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 
15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:27:35 crc 
restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 
15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:35 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 
15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc 
restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:27:36 crc restorecon[4686]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 15:27:36 crc restorecon[4686]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 15:27:36 crc restorecon[4686]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 29 15:27:36 crc kubenswrapper[5008]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 15:27:36 crc kubenswrapper[5008]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 29 15:27:36 crc kubenswrapper[5008]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 15:27:36 crc kubenswrapper[5008]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 15:27:36 crc kubenswrapper[5008]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 15:27:36 crc kubenswrapper[5008]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.976413 5008 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.987865 5008 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.987902 5008 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.987915 5008 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.987924 5008 feature_gate.go:330] unrecognized feature gate: Example Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.987934 5008 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.987943 5008 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.987953 5008 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.987962 5008 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.987972 5008 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.987981 5008 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.987990 5008 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.987998 5008 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988007 5008 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988015 5008 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988031 5008 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988040 5008 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988048 5008 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988057 5008 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988066 5008 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988075 5008 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988084 5008 feature_gate.go:330] unrecognized 
feature gate: VSphereDriverConfiguration Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988093 5008 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988101 5008 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988110 5008 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988119 5008 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988127 5008 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988135 5008 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988143 5008 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988152 5008 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988161 5008 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988169 5008 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988177 5008 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988185 5008 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988194 5008 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988202 5008 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988211 5008 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988219 5008 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988235 5008 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988249 5008 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988261 5008 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988274 5008 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988286 5008 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988304 5008 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
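The long runs of feature_gate.go:330 warnings above are expected on OpenShift: the cluster-wide feature-gate list (GatewayAPI, PinnedImages, and the rest) is handed straight to the kubelet, which only registers upstream Kubernetes gates, so every OpenShift-only name is warned about and ignored. Explicitly setting a gate that has already gone GA (CloudDualStackNodeIPs, ValidatingAdmissionPolicy) or one that is deprecated (KMSv1) draws the feature_gate.go:353/351 notices instead. A minimal standalone sketch of that parsing pattern follows; it is not the actual k8s.io/component-base/featuregate implementation, and the gate names are simply taken from the log.

    package main

    import (
        "log"
        "sort"
    )

    // applyGates mirrors the warning behaviour seen above: unknown gates are
    // warned about and skipped, and explicitly setting an already-GA gate
    // still works but draws a notice.
    func applyGates(defaults, gaGates, requested map[string]bool) map[string]bool {
        effective := make(map[string]bool, len(defaults))
        for name, v := range defaults {
            effective[name] = v
        }
        names := make([]string, 0, len(requested))
        for name := range requested {
            names = append(names, name)
        }
        sort.Strings(names) // deterministic warning order
        for _, name := range names {
            if _, known := defaults[name]; !known {
                log.Printf("W] unrecognized feature gate: %s", name)
                continue
            }
            if gaGates[name] {
                log.Printf("W] Setting GA feature gate %s=%t. It will be removed in a future release.", name, requested[name])
            }
            effective[name] = requested[name]
        }
        return effective
    }

    func main() {
        defaults := map[string]bool{"CloudDualStackNodeIPs": false, "KMSv1": false}
        gaGates := map[string]bool{"CloudDualStackNodeIPs": true}
        requested := map[string]bool{
            "CloudDualStackNodeIPs": true,
            "GatewayAPI":            true, // OpenShift-only, unknown to the kubelet
        }
        log.Printf("feature gates: %v", applyGates(defaults, gaGates, requested))
    }
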
Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988316 5008 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988325 5008 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988334 5008 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988342 5008 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988350 5008 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988358 5008 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988368 5008 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988376 5008 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988384 5008 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988392 5008 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988400 5008 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988411 5008 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988419 5008 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988430 5008 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988441 5008 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988449 5008 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988458 5008 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988466 5008 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988475 5008 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988487 5008 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988498 5008 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988509 5008 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988519 5008 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988530 5008 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988541 5008 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988552 5008 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988561 5008 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.988570 5008 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.988881 5008 flags.go:64] FLAG: --address="0.0.0.0" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.988908 5008 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.988936 5008 flags.go:64] FLAG: --anonymous-auth="true" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.988954 5008 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.988971 5008 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.988983 5008 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.988999 5008 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989014 5008 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989027 5008 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989039 5008 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989049 5008 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989060 5008 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989070 5008 flags.go:64] 
FLAG: --cgroup-driver="cgroupfs" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989080 5008 flags.go:64] FLAG: --cgroup-root="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989090 5008 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989110 5008 flags.go:64] FLAG: --client-ca-file="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989120 5008 flags.go:64] FLAG: --cloud-config="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989132 5008 flags.go:64] FLAG: --cloud-provider="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989141 5008 flags.go:64] FLAG: --cluster-dns="[]" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989156 5008 flags.go:64] FLAG: --cluster-domain="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989167 5008 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989180 5008 flags.go:64] FLAG: --config-dir="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989192 5008 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989205 5008 flags.go:64] FLAG: --container-log-max-files="5" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989232 5008 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989242 5008 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989252 5008 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989262 5008 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989272 5008 flags.go:64] FLAG: --contention-profiling="false" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989282 5008 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989292 5008 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989302 5008 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989311 5008 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989323 5008 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989391 5008 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989403 5008 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989414 5008 flags.go:64] FLAG: --enable-load-reader="false" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989424 5008 flags.go:64] FLAG: --enable-server="true" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989434 5008 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989448 5008 flags.go:64] FLAG: --event-burst="100" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989459 5008 flags.go:64] FLAG: --event-qps="50" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989469 5008 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989479 5008 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 
29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989489 5008 flags.go:64] FLAG: --eviction-hard="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989501 5008 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989511 5008 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989521 5008 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989532 5008 flags.go:64] FLAG: --eviction-soft="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989542 5008 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989551 5008 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989562 5008 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989573 5008 flags.go:64] FLAG: --experimental-mounter-path="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989583 5008 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989592 5008 flags.go:64] FLAG: --fail-swap-on="true" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989602 5008 flags.go:64] FLAG: --feature-gates="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989614 5008 flags.go:64] FLAG: --file-check-frequency="20s" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989624 5008 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989634 5008 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989646 5008 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989659 5008 flags.go:64] FLAG: --healthz-port="10248" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989671 5008 flags.go:64] FLAG: --help="false" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989684 5008 flags.go:64] FLAG: --hostname-override="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989696 5008 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989709 5008 flags.go:64] FLAG: --http-check-frequency="20s" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989721 5008 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989733 5008 flags.go:64] FLAG: --image-credential-provider-config="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989743 5008 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989753 5008 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989763 5008 flags.go:64] FLAG: --image-service-endpoint="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989807 5008 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989819 5008 flags.go:64] FLAG: --kube-api-burst="100" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989829 5008 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989839 5008 flags.go:64] FLAG: --kube-api-qps="50" Jan 29 15:27:36 crc 
kubenswrapper[5008]: I0129 15:27:36.989849 5008 flags.go:64] FLAG: --kube-reserved="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989859 5008 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989868 5008 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989879 5008 flags.go:64] FLAG: --kubelet-cgroups="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989888 5008 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989898 5008 flags.go:64] FLAG: --lock-file="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989909 5008 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989923 5008 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989935 5008 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989956 5008 flags.go:64] FLAG: --log-json-split-stream="false" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989968 5008 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989981 5008 flags.go:64] FLAG: --log-text-split-stream="false" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.989992 5008 flags.go:64] FLAG: --logging-format="text" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990002 5008 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990013 5008 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990023 5008 flags.go:64] FLAG: --manifest-url="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990032 5008 flags.go:64] FLAG: --manifest-url-header="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990045 5008 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990056 5008 flags.go:64] FLAG: --max-open-files="1000000" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990068 5008 flags.go:64] FLAG: --max-pods="110" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990079 5008 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990089 5008 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990099 5008 flags.go:64] FLAG: --memory-manager-policy="None" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990109 5008 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990119 5008 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990129 5008 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990139 5008 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990161 5008 flags.go:64] FLAG: --node-status-max-images="50" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990171 5008 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990181 5008 
flags.go:64] FLAG: --oom-score-adj="-999" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990192 5008 flags.go:64] FLAG: --pod-cidr="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990201 5008 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990218 5008 flags.go:64] FLAG: --pod-manifest-path="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990227 5008 flags.go:64] FLAG: --pod-max-pids="-1" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990237 5008 flags.go:64] FLAG: --pods-per-core="0" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990247 5008 flags.go:64] FLAG: --port="10250" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990258 5008 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990267 5008 flags.go:64] FLAG: --provider-id="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990277 5008 flags.go:64] FLAG: --qos-reserved="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990287 5008 flags.go:64] FLAG: --read-only-port="10255" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990297 5008 flags.go:64] FLAG: --register-node="true" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990307 5008 flags.go:64] FLAG: --register-schedulable="true" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990318 5008 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990334 5008 flags.go:64] FLAG: --registry-burst="10" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990344 5008 flags.go:64] FLAG: --registry-qps="5" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990354 5008 flags.go:64] FLAG: --reserved-cpus="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990364 5008 flags.go:64] FLAG: --reserved-memory="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990376 5008 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990387 5008 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990400 5008 flags.go:64] FLAG: --rotate-certificates="false" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990414 5008 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990429 5008 flags.go:64] FLAG: --runonce="false" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990441 5008 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990455 5008 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990468 5008 flags.go:64] FLAG: --seccomp-default="false" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990479 5008 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990489 5008 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990499 5008 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990510 5008 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 
15:27:36.990521 5008 flags.go:64] FLAG: --storage-driver-password="root" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990530 5008 flags.go:64] FLAG: --storage-driver-secure="false" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990541 5008 flags.go:64] FLAG: --storage-driver-table="stats" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990551 5008 flags.go:64] FLAG: --storage-driver-user="root" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990561 5008 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990571 5008 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990581 5008 flags.go:64] FLAG: --system-cgroups="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990591 5008 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990607 5008 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990616 5008 flags.go:64] FLAG: --tls-cert-file="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990626 5008 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990638 5008 flags.go:64] FLAG: --tls-min-version="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990648 5008 flags.go:64] FLAG: --tls-private-key-file="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990658 5008 flags.go:64] FLAG: --topology-manager-policy="none" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990668 5008 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990678 5008 flags.go:64] FLAG: --topology-manager-scope="container" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990690 5008 flags.go:64] FLAG: --v="2" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990716 5008 flags.go:64] FLAG: --version="false" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990733 5008 flags.go:64] FLAG: --vmodule="" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990749 5008 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.990761 5008 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991024 5008 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991037 5008 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991047 5008 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991059 5008 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991069 5008 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991080 5008 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
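Every flags.go:64 "FLAG: --name=\"value\"" entry in the block above is the kubelet dumping its fully parsed command line, one line per registered flag, before the config file overrides are applied. The kubelet does this through pflag and component-base helpers; the same pattern using only the standard library looks roughly like this (the two flags are stand-ins, the kubelet registers hundreds):

    package main

    import (
        "flag"
        "log"
    )

    func main() {
        // Stand-in flags; the defaults echo two entries from the log above.
        flag.String("node-ip", "192.168.126.11", "IP address of the node")
        flag.Int("max-pods", 110, "maximum number of pods per node")
        flag.Parse()

        // Walk every registered flag and log its effective value; this is
        // what produces the FLAG: --name="value" lines at startup.
        flag.VisitAll(func(f *flag.Flag) {
            log.Printf("FLAG: --%s=%q", f.Name, f.Value.String())
        })
    }
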
Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991091 5008 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991101 5008 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991110 5008 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991120 5008 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991129 5008 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991138 5008 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991147 5008 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991158 5008 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991169 5008 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991178 5008 feature_gate.go:330] unrecognized feature gate: Example Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991187 5008 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991199 5008 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991213 5008 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991226 5008 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991237 5008 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991248 5008 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991261 5008 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991276 5008 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991285 5008 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991338 5008 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991347 5008 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991356 5008 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991365 5008 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991373 5008 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991382 5008 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991390 5008 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991399 5008 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991407 5008 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991418 5008 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991426 5008 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991435 5008 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991443 5008 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991452 5008 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991461 5008 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991469 5008 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991478 5008 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991487 5008 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991495 5008 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991505 5008 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 15:27:36 crc 
kubenswrapper[5008]: W0129 15:27:36.991513 5008 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991521 5008 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991530 5008 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991538 5008 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991551 5008 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991560 5008 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991568 5008 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991577 5008 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991585 5008 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991594 5008 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991605 5008 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991614 5008 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991622 5008 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991630 5008 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991639 5008 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991647 5008 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991656 5008 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991664 5008 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991672 5008 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991681 5008 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991690 5008 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991698 5008 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991706 5008 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991714 5008 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991723 5008 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 15:27:36 crc kubenswrapper[5008]: W0129 15:27:36.991732 5008 
feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 15:27:36 crc kubenswrapper[5008]: I0129 15:27:36.991760 5008 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.004525 5008 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.004585 5008 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.004756 5008 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.004775 5008 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.004824 5008 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.004837 5008 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.004849 5008 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.004862 5008 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.004873 5008 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.004885 5008 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.004896 5008 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.004907 5008 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.004918 5008 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.004928 5008 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.004974 5008 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.004983 5008 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.004992 5008 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005001 5008 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005009 5008 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005018 5008 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005026 5008 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 
15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005035 5008 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005046 5008 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005054 5008 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005062 5008 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005071 5008 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005080 5008 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005088 5008 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005097 5008 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005105 5008 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005114 5008 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005122 5008 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005131 5008 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005139 5008 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005148 5008 feature_gate.go:330] unrecognized feature gate: Example Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005156 5008 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005168 5008 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005176 5008 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005184 5008 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005193 5008 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005201 5008 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005210 5008 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005218 5008 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005230 5008 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005243 5008 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005256 5008 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005266 5008 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005276 5008 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005285 5008 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005296 5008 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005307 5008 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005318 5008 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005329 5008 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005343 5008 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005357 5008 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005368 5008 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005377 5008 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005387 5008 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005397 5008 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
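Once the overrides are applied, the feature_gate.go:386 "feature gates: {map[...]}" summaries are the digest worth reading: they appear to list only the gates that were explicitly set and recognized, not the full known set, and the names come out alphabetically because Go's fmt package has printed map keys in sorted order since Go 1.12. A trimmed illustration, with a subset of the gates from the summary lines:

    package main

    import "fmt"

    func main() {
        gates := map[string]bool{
            "ValidatingAdmissionPolicy":              true,
            "KMSv1":                                  true,
            "CloudDualStackNodeIPs":                  true,
            "DisableKubeletCloudCredentialProviders": true,
            "NodeSwap":                               false,
        }
        // fmt sorts map keys, so the printed order matches the log's.
        fmt.Printf("feature gates: %v\n", gates)
    }
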
Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005408 5008 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005418 5008 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005427 5008 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005437 5008 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005445 5008 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005454 5008 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005462 5008 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005471 5008 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005480 5008 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005488 5008 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005496 5008 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005504 5008 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005513 5008 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005524 5008 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.005539 5008 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005818 5008 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005839 5008 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005850 5008 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005860 5008 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005868 5008 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005877 5008 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005885 5008 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005894 5008 feature_gate.go:330] unrecognized feature gate: 
OnClusterBuild Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005902 5008 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005913 5008 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005921 5008 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005930 5008 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005938 5008 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005947 5008 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005955 5008 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005964 5008 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005972 5008 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005980 5008 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.005992 5008 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006005 5008 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006014 5008 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006024 5008 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006033 5008 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006042 5008 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006053 5008 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006061 5008 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006069 5008 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006078 5008 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006087 5008 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006096 5008 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006104 5008 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006113 5008 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006121 5008 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 
15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006130 5008 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006139 5008 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006148 5008 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006157 5008 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006165 5008 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006174 5008 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006183 5008 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006191 5008 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006200 5008 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006209 5008 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006217 5008 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006225 5008 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006234 5008 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006242 5008 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006250 5008 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006258 5008 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006267 5008 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006276 5008 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006284 5008 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006292 5008 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006303 5008 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
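The server.go:493 "Golang settings" entry further up reports GOGC, GOMAXPROCS and GOTRACEBACK as empty strings, which just means those environment variables are unset and the Go runtime defaults apply. Something along these lines is all it takes to reproduce that report; this is a hypothetical helper, not the kubelet's code:

    package main

    import (
        "fmt"
        "os"
        "runtime"
    )

    func main() {
        // Empty values mean the variables are unset and defaults apply
        // (GOGC=100, GOMAXPROCS=NumCPU, GOTRACEBACK=single).
        fmt.Printf("Golang settings GOGC=%q GOMAXPROCS=%q GOTRACEBACK=%q\n",
            os.Getenv("GOGC"), os.Getenv("GOMAXPROCS"), os.Getenv("GOTRACEBACK"))
        fmt.Println("effective GOMAXPROCS:", runtime.GOMAXPROCS(0))
    }
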
Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006313 5008 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006322 5008 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006330 5008 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006339 5008 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006347 5008 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006356 5008 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006367 5008 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006377 5008 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006387 5008 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006397 5008 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006406 5008 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006415 5008 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006426 5008 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006437 5008 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006447 5008 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006457 5008 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.006468 5008 feature_gate.go:330] unrecognized feature gate: Example Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.006482 5008 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.008379 5008 server.go:940] "Client rotation is on, will bootstrap in background" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.017667 5008 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.017863 5008 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
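The rotation deadline the certificate manager logs next (2026-01-06, against a 2026-02-24 expiry) is not a fixed fraction of the lifetime: the kubelet picks a jittered point between roughly 70% and 90% of the certificate's validity so a fleet of nodes does not hit the CA at once. A sketch of that policy follows; the notBefore date is an assumption, since the log only shows the expiry.

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // nextRotationDeadline mimics the kubelet certificate manager's policy:
    // rotate at a jittered point 70-90% of the way through the cert's
    // lifetime, spreading renewals across the fleet.
    func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        notBefore := time.Date(2025, 2, 24, 5, 52, 8, 0, time.UTC)  // assumed issue time
        notAfter := time.Date(2026, 2, 24, 5, 52, 8, 0, time.UTC)   // expiry from the log
        fmt.Println("rotation deadline:", nextRotationDeadline(notBefore, notAfter))
    }

On this boot the first signing attempt that follows fails with connection refused because the API server at api-int.crc.testing:6443 is not up yet; the manager keeps the current pair and retries in the background.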
Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.021513 5008 server.go:997] "Starting client certificate rotation" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.021566 5008 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.021862 5008 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-06 22:50:41.712782906 +0000 UTC Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.022017 5008 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.075135 5008 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 15:27:37 crc kubenswrapper[5008]: E0129 15:27:37.075682 5008 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.078306 5008 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.100874 5008 log.go:25] "Validated CRI v1 runtime API" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.205985 5008 log.go:25] "Validated CRI v1 image API" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.208563 5008 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.213352 5008 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-29-15-22-48-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.213390 5008 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.227934 5008 manager.go:217] Machine: {Timestamp:2026-01-29 15:27:37.225174186 +0000 UTC m=+0.898028443 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:ad986a03-9926-4209-a3e1-d38e666bee86 BootID:23463cb0-4db2-46f4-86c5-cabe2301deff Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 
Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:4d:d2:f0 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:4d:d2:f0 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:51:fe:80 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:56:16:7f Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:b6:e7:3e Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:da:67:7c Speed:-1 Mtu:1496} {Name:eth10 MacAddress:2a:d0:3c:95:4c:ad Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:12:d8:f3:c9:58:49 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] 
Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.228157 5008 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.228280 5008 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.229483 5008 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.229666 5008 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.229705 5008 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.229978 5008 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.229989 5008 
container_manager_linux.go:303] "Creating device plugin manager" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.230411 5008 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.230441 5008 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.231160 5008 state_mem.go:36] "Initialized new in-memory state store" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.231250 5008 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.234563 5008 kubelet.go:418] "Attempting to sync node with API server" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.234591 5008 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.234619 5008 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.234634 5008 kubelet.go:324] "Adding apiserver pod source" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.234647 5008 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.247162 5008 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.248292 5008 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.249176 5008 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 29 15:27:37 crc kubenswrapper[5008]: E0129 15:27:37.252411 5008 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.253028 5008 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 29 15:27:37 crc kubenswrapper[5008]: E0129 15:27:37.253189 5008 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.256013 5008 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.260656 5008 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 29 15:27:37 crc 
kubenswrapper[5008]: I0129 15:27:37.260697 5008 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.260707 5008 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.260717 5008 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.260734 5008 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.260745 5008 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.260754 5008 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.260812 5008 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.260824 5008 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.260842 5008 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.260894 5008 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.260908 5008 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.262467 5008 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.263202 5008 server.go:1280] "Started kubelet" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.263342 5008 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 29 15:27:37 crc systemd[1]: Started Kubernetes Kubelet. 
Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.267544 5008 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.267556 5008 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.268321 5008 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.268552 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.268598 5008 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 15:27:37 crc kubenswrapper[5008]: E0129 15:27:37.268820 5008 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.269228 5008 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.269270 5008 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.269225 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 11:36:24.46043235 +0000 UTC Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.269347 5008 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 15:27:37 crc kubenswrapper[5008]: E0129 15:27:37.270227 5008 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="200ms" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.271137 5008 factory.go:55] Registering systemd factory Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.271245 5008 factory.go:221] Registration of the systemd container factory successfully Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.270999 5008 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 29 15:27:37 crc kubenswrapper[5008]: E0129 15:27:37.271582 5008 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.272716 5008 factory.go:153] Registering CRI-O factory Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.272746 5008 factory.go:221] Registration of the crio container factory successfully Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.272845 5008 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.272872 5008 factory.go:103] Registering Raw 
factory Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.272892 5008 manager.go:1196] Started watching for new ooms in manager Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.273182 5008 server.go:460] "Adding debug handlers to kubelet server" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.273852 5008 manager.go:319] Starting recovery of all containers Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276317 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276370 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276381 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276390 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276401 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276414 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276424 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276433 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276444 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276454 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 29 
15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276463 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276473 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276485 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276494 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276504 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276518 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276557 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276567 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276579 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276589 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276600 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: 
I0129 15:27:37.276609 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276620 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276667 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276713 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276725 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276739 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276750 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276761 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276771 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276797 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276817 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276830 5008 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276839 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276849 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276860 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276870 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276881 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276892 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276902 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276913 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276926 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276936 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276946 5008 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276956 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276966 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276977 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276987 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.276999 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.277011 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.277379 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.277387 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.277400 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.277410 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.277419 5008 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.277428 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.277439 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.277450 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.277458 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.277467 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.277475 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.277484 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.277524 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.277536 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.277549 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.277560 5008 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.277573 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.277586 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.277599 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.277613 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.278034 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.278102 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.278115 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.278159 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.278183 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.278197 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.278208 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.278219 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.278230 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.278241 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.278255 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.278267 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.278279 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.278288 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.278301 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.278312 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.278324 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.278334 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.278344 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.278354 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.278364 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.278376 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.278389 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.278400 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.278412 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.278423 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.278435 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.278447 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.278462 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: E0129 15:27:37.279157 5008 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.50:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f3d308a04924f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 15:27:37.263174223 +0000 UTC m=+0.936028470,LastTimestamp:2026-01-29 15:27:37.263174223 +0000 UTC m=+0.936028470,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.290445 5008 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.290523 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.290546 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.290569 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.290587 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.290605 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.290637 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.290662 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.290682 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.290709 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.290737 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.290755 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.290776 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.290818 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.290837 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.290859 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.290877 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.290898 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.290913 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.290935 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.290951 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.290966 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.290987 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291002 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291246 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291276 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291292 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291308 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291327 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291342 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291361 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291377 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291392 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291409 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291424 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291443 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291458 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291472 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291491 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291504 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291521 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291535 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291549 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291566 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291580 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291593 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291609 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291622 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291639 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291653 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291666 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291682 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291697 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291713 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291727 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291740 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291758 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291772 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291811 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291825 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291844 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291860 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291875 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291891 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291906 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291921 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291938 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291951 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291968 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291981 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.291995 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292011 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292024 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292042 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292055 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292068 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292085 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292099 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292116 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292132 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292148 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292166 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292180 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292242 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292256 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292270 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292290 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292307 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292326 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292340 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292353 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292372 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292385 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292402 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292415 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292428 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292448 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292462 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292481 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292495 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292508 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292525 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292540 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292556 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292579 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292593 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292611 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292627 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292654 5008 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292684 5008 reconstruct.go:97] "Volume reconstruction finished" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.292698 5008 reconciler.go:26] "Reconciler: start to sync state" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.301677 5008 manager.go:324] Recovery completed Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.312147 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.313717 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.314016 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.314027 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.314712 5008 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.314728 5008 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.314752 5008 state_mem.go:36] "Initialized new in-memory state store" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.320153 5008 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.322328 5008 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.322385 5008 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.322423 5008 kubelet.go:2335] "Starting kubelet main sync loop" Jan 29 15:27:37 crc kubenswrapper[5008]: E0129 15:27:37.322486 5008 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.324389 5008 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 29 15:27:37 crc kubenswrapper[5008]: E0129 15:27:37.324502 5008 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.344359 5008 policy_none.go:49] "None policy: Start" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.345590 5008 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.345625 5008 state_mem.go:35] "Initializing new in-memory state store" Jan 29 15:27:37 crc kubenswrapper[5008]: E0129 15:27:37.368915 5008 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.408835 5008 manager.go:334] "Starting Device Plugin manager" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.408896 5008 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.408912 5008 server.go:79] "Starting device plugin registration server" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.409355 5008 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.409374 5008 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.409561 5008 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.409656 5008 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.409668 5008 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 15:27:37 crc kubenswrapper[5008]: E0129 15:27:37.416972 5008 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.423399 5008 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 29 15:27:37 crc kubenswrapper[5008]: 
I0129 15:27:37.423477 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.424616 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.424661 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.424678 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.424836 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.425089 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.425138 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.425526 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.425555 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.425566 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.425662 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.426015 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.426101 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.426329 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.426359 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.426369 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.426469 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.426500 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.426512 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.426674 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.426799 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
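
The "SyncLoop ADD" with source="file" just above lists the five static pods read from the kubelet's manifest directory; with the API server still refusing connections, these file-sourced manifests are the only pod source available, and since no sandboxes survive the restart ("No sandbox for pod can be found"), CRI-O has to create a fresh sandbox per pod. File-sourced pods are also why the UIDs here are bare hex strings such as d1b160f5dda77d281dd8e69ec8d817f9 rather than API-assigned UUIDs: the kubelet derives a static pod's UID from an MD5-style hash over the decoded manifest and node identity (details vary by version). A rough illustration that just hashes raw manifest bytes, a deliberate simplification; the manifest path is an assumption, not taken from this log:

```go
// Sketch only: shows why static pods carry bare hex UIDs instead of
// RFC-4122 UUIDs. The real kubelet hashes the decoded pod object plus node
// identity; hashing the raw file bytes is a stand-in for that idea.
package main

import (
	"crypto/md5"
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Common staticPodPath on OpenShift/kubeadm nodes; an assumption here.
	manifests, _ := filepath.Glob("/etc/kubernetes/manifests/*.yaml")
	for _, m := range manifests {
		data, err := os.ReadFile(m)
		if err != nil {
			continue
		}
		fmt.Printf("%s uid-like-hash=%x\n", m, md5.Sum(data))
	}
}
```
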
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.426866 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.427297 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.427315 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.427331 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.427492 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.427503 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.427576 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.427658 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.427859 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.427887 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.428517 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.428548 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.428556 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.428646 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.428660 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.428667 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.428972 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.429068 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.429116 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.429376 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.429421 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.430466 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.430497 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.430510 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:37 crc kubenswrapper[5008]: E0129 15:27:37.471465 5008 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="400ms" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.493771 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.493851 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.493877 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.493896 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.493917 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.493938 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.494014 5008 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.494069 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.494105 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.494125 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.494157 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.494188 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.494210 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.494237 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.494259 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.509455 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
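
Every VerifyControllerAttachedVolume entry above names a kubernetes.io/host-path volume. hostPath volumes never pass through the attach/detach controller, so verification completes locally and the reconciler can proceed straight to the MountVolume operations that follow; for hostPath, SetUp amounts to little more than validating the directory on the node. A sketch of that idea (not the actual hostPath plugin code):

```go
// Sketch only: why host-path volumes clear VerifyControllerAttachedVolume
// immediately. There is no controller attach step; "mounting" one is
// essentially confirming the host directory exists, as mimicked here.
package main

import (
	"fmt"
	"os"
)

// setUpHostPath stands in for the hostPath plugin's SetUp: no attach,
// no formatting, just a directory check on the node.
func setUpHostPath(path string) error {
	info, err := os.Stat(path)
	if err != nil {
		return err
	}
	if !info.IsDir() {
		return fmt.Errorf("%s: not a directory", path)
	}
	return nil
}

func main() {
	// The var-lib-kubelet volume of kube-rbac-proxy-crio-crc above maps to
	// this host directory; adjust for a real node.
	if err := setUpHostPath("/var/lib/kubelet"); err != nil {
		fmt.Fprintln(os.Stderr, "setup failed:", err)
		return
	}
	fmt.Println("MountVolume.SetUp succeeded")
}
```
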
Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.510743 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.510803 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.510820 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.510846 5008 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 15:27:37 crc kubenswrapper[5008]: E0129 15:27:37.511346 5008 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.50:6443: connect: connection refused" node="crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.595642 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.595694 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.595729 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.595757 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.595777 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.595817 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.595832 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.595845 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" 
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.595860 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.595852 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.595904 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.595925 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.595873 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.595906 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.595854 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.595904 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.595955 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.596001 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.596066 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.596093 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.596082 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.596134 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.596158 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.596179 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.596197 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.596216 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.596244 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.596281 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.596306 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.596319 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.711998 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.713209 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.713237 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.713247 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.713268 5008 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 15:27:37 crc kubenswrapper[5008]: E0129 15:27:37.713611 5008 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.50:6443: connect: connection refused" node="crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.756207 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.762572 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.791893 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.812425 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: I0129 15:27:37.819084 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.848958 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-443e5233243bfac81b4162412778665f51cf2026f6a464b4173292c6b277adbf WatchSource:0}: Error finding container 443e5233243bfac81b4162412778665f51cf2026f6a464b4173292c6b277adbf: Status 404 returned error can't find the container with id 443e5233243bfac81b4162412778665f51cf2026f6a464b4173292c6b277adbf Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.849573 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-f77b25e7b12292e777710ff54d47e582f26c02a813a4fa7d24d243f5248dc375 WatchSource:0}: Error finding container f77b25e7b12292e777710ff54d47e582f26c02a813a4fa7d24d243f5248dc375: Status 404 returned error can't find the container with id f77b25e7b12292e777710ff54d47e582f26c02a813a4fa7d24d243f5248dc375 Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.860472 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-735192c16eaefc25698b6dbbd3a2ad30d1270d93dfd04fd43ad5c91ebff6068f WatchSource:0}: Error finding container 735192c16eaefc25698b6dbbd3a2ad30d1270d93dfd04fd43ad5c91ebff6068f: Status 404 returned error can't find the container with id 735192c16eaefc25698b6dbbd3a2ad30d1270d93dfd04fd43ad5c91ebff6068f Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.861932 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-f369c481aedb9ac952c8d78015d769e907ceb872a2c7c4e4b2236479ad76c2d5 WatchSource:0}: Error finding container f369c481aedb9ac952c8d78015d769e907ceb872a2c7c4e4b2236479ad76c2d5: Status 404 returned error can't find the container with id f369c481aedb9ac952c8d78015d769e907ceb872a2c7c4e4b2236479ad76c2d5 Jan 29 15:27:37 crc kubenswrapper[5008]: W0129 15:27:37.862484 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-40eeba3db8c4a3a0f2175b908d9d580006fe1b7ac37cd3536e0ae8090fe99c3e WatchSource:0}: Error finding container 40eeba3db8c4a3a0f2175b908d9d580006fe1b7ac37cd3536e0ae8090fe99c3e: Status 404 returned error can't find the container with id 40eeba3db8c4a3a0f2175b908d9d580006fe1b7ac37cd3536e0ae8090fe99c3e Jan 29 15:27:37 crc kubenswrapper[5008]: E0129 15:27:37.872839 5008 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="800ms" Jan 29 15:27:38 crc kubenswrapper[5008]: I0129 15:27:38.114731 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:38 crc kubenswrapper[5008]: I0129 15:27:38.116201 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:38 crc kubenswrapper[5008]: I0129 15:27:38.116262 5008 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:38 crc kubenswrapper[5008]: I0129 15:27:38.116284 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:38 crc kubenswrapper[5008]: I0129 15:27:38.116327 5008 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 15:27:38 crc kubenswrapper[5008]: E0129 15:27:38.117045 5008 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.50:6443: connect: connection refused" node="crc" Jan 29 15:27:38 crc kubenswrapper[5008]: W0129 15:27:38.156950 5008 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 29 15:27:38 crc kubenswrapper[5008]: E0129 15:27:38.157038 5008 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:27:38 crc kubenswrapper[5008]: I0129 15:27:38.264413 5008 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 29 15:27:38 crc kubenswrapper[5008]: I0129 15:27:38.269709 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 17:53:39.710905526 +0000 UTC Jan 29 15:27:38 crc kubenswrapper[5008]: W0129 15:27:38.322847 5008 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 29 15:27:38 crc kubenswrapper[5008]: E0129 15:27:38.322944 5008 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:27:38 crc kubenswrapper[5008]: I0129 15:27:38.327515 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"735192c16eaefc25698b6dbbd3a2ad30d1270d93dfd04fd43ad5c91ebff6068f"} Jan 29 15:27:38 crc kubenswrapper[5008]: I0129 15:27:38.328482 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"443e5233243bfac81b4162412778665f51cf2026f6a464b4173292c6b277adbf"} Jan 29 15:27:38 crc kubenswrapper[5008]: I0129 15:27:38.329470 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f77b25e7b12292e777710ff54d47e582f26c02a813a4fa7d24d243f5248dc375"} Jan 29 15:27:38 crc kubenswrapper[5008]: I0129 15:27:38.331005 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"40eeba3db8c4a3a0f2175b908d9d580006fe1b7ac37cd3536e0ae8090fe99c3e"} Jan 29 15:27:38 crc kubenswrapper[5008]: I0129 15:27:38.332126 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f369c481aedb9ac952c8d78015d769e907ceb872a2c7c4e4b2236479ad76c2d5"} Jan 29 15:27:38 crc kubenswrapper[5008]: W0129 15:27:38.602547 5008 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 29 15:27:38 crc kubenswrapper[5008]: E0129 15:27:38.602675 5008 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:27:38 crc kubenswrapper[5008]: W0129 15:27:38.633436 5008 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 29 15:27:38 crc kubenswrapper[5008]: E0129 15:27:38.633558 5008 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:27:38 crc kubenswrapper[5008]: E0129 15:27:38.673967 5008 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="1.6s" Jan 29 15:27:38 crc kubenswrapper[5008]: I0129 15:27:38.917383 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:38 crc kubenswrapper[5008]: I0129 15:27:38.918339 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:38 crc kubenswrapper[5008]: I0129 15:27:38.918377 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:38 crc kubenswrapper[5008]: I0129 15:27:38.918387 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:38 crc kubenswrapper[5008]: I0129 15:27:38.918410 5008 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 15:27:38 crc kubenswrapper[5008]: E0129 15:27:38.918725 5008 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.50:6443: connect: connection refused" node="crc" Jan 29 15:27:39 crc kubenswrapper[5008]: I0129 15:27:39.179768 5008 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 15:27:39 crc kubenswrapper[5008]: E0129 15:27:39.181675 5008 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:27:39 crc kubenswrapper[5008]: I0129 15:27:39.265004 5008 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 29 15:27:39 crc kubenswrapper[5008]: I0129 15:27:39.270183 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 21:19:48.436735268 +0000 UTC Jan 29 15:27:39 crc kubenswrapper[5008]: I0129 15:27:39.336834 5008 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063" exitCode=0 Jan 29 15:27:39 crc kubenswrapper[5008]: I0129 15:27:39.336941 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063"} Jan 29 15:27:39 crc kubenswrapper[5008]: I0129 15:27:39.336989 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:39 crc kubenswrapper[5008]: I0129 15:27:39.338459 5008 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e" exitCode=0 Jan 29 15:27:39 crc kubenswrapper[5008]: I0129 15:27:39.338504 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e"} Jan 29 15:27:39 crc kubenswrapper[5008]: I0129 15:27:39.338608 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:39 crc kubenswrapper[5008]: I0129 15:27:39.338746 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:39 crc kubenswrapper[5008]: I0129 15:27:39.338774 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:39 crc kubenswrapper[5008]: I0129 15:27:39.338801 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:39 crc kubenswrapper[5008]: I0129 15:27:39.339716 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:39 crc kubenswrapper[5008]: I0129 15:27:39.339762 5008 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:39 crc kubenswrapper[5008]: I0129 15:27:39.339808 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:39 crc kubenswrapper[5008]: I0129 15:27:39.340734 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca"} Jan 29 15:27:39 crc kubenswrapper[5008]: I0129 15:27:39.341433 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:39 crc kubenswrapper[5008]: I0129 15:27:39.342419 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:39 crc kubenswrapper[5008]: I0129 15:27:39.342461 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:39 crc kubenswrapper[5008]: I0129 15:27:39.342477 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:39 crc kubenswrapper[5008]: I0129 15:27:39.343910 5008 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5" exitCode=0 Jan 29 15:27:39 crc kubenswrapper[5008]: I0129 15:27:39.343968 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5"} Jan 29 15:27:39 crc kubenswrapper[5008]: I0129 15:27:39.344021 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:39 crc kubenswrapper[5008]: I0129 15:27:39.344804 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:39 crc kubenswrapper[5008]: I0129 15:27:39.344834 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:39 crc kubenswrapper[5008]: I0129 15:27:39.344856 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:39 crc kubenswrapper[5008]: I0129 15:27:39.345334 5008 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="83662d418c40cdea3f8af62c97834fd30d88d2fe441ca4a0576566e8f6e9bc1d" exitCode=0 Jan 29 15:27:39 crc kubenswrapper[5008]: I0129 15:27:39.345377 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"83662d418c40cdea3f8af62c97834fd30d88d2fe441ca4a0576566e8f6e9bc1d"} Jan 29 15:27:39 crc kubenswrapper[5008]: I0129 15:27:39.345386 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:39 crc kubenswrapper[5008]: I0129 15:27:39.346318 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:39 crc kubenswrapper[5008]: I0129 15:27:39.346341 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:27:39 crc kubenswrapper[5008]: I0129 15:27:39.346350 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:40 crc kubenswrapper[5008]: I0129 15:27:40.265054 5008 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 29 15:27:40 crc kubenswrapper[5008]: I0129 15:27:40.270366 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 03:07:27.06982816 +0000 UTC Jan 29 15:27:40 crc kubenswrapper[5008]: E0129 15:27:40.275144 5008 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="3.2s" Jan 29 15:27:40 crc kubenswrapper[5008]: I0129 15:27:40.349841 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2"} Jan 29 15:27:40 crc kubenswrapper[5008]: I0129 15:27:40.349893 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0"} Jan 29 15:27:40 crc kubenswrapper[5008]: I0129 15:27:40.349906 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7"} Jan 29 15:27:40 crc kubenswrapper[5008]: I0129 15:27:40.353045 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27"} Jan 29 15:27:40 crc kubenswrapper[5008]: I0129 15:27:40.353108 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b"} Jan 29 15:27:40 crc kubenswrapper[5008]: I0129 15:27:40.353109 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:40 crc kubenswrapper[5008]: I0129 15:27:40.353117 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f"} Jan 29 15:27:40 crc kubenswrapper[5008]: I0129 15:27:40.354023 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:40 crc kubenswrapper[5008]: I0129 15:27:40.354051 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:40 
crc kubenswrapper[5008]: I0129 15:27:40.354062 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:40 crc kubenswrapper[5008]: I0129 15:27:40.354200 5008 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5" exitCode=0 Jan 29 15:27:40 crc kubenswrapper[5008]: I0129 15:27:40.354249 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5"} Jan 29 15:27:40 crc kubenswrapper[5008]: I0129 15:27:40.354330 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:40 crc kubenswrapper[5008]: I0129 15:27:40.355519 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:40 crc kubenswrapper[5008]: I0129 15:27:40.355563 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:40 crc kubenswrapper[5008]: I0129 15:27:40.355573 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:40 crc kubenswrapper[5008]: I0129 15:27:40.357304 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e2cad6ba94fe1fbb01c043c1e8eabda3989f05822a3a7a6e105d2cd8aa794333"} Jan 29 15:27:40 crc kubenswrapper[5008]: I0129 15:27:40.357359 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:40 crc kubenswrapper[5008]: I0129 15:27:40.358885 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:40 crc kubenswrapper[5008]: I0129 15:27:40.358911 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:40 crc kubenswrapper[5008]: I0129 15:27:40.358922 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:40 crc kubenswrapper[5008]: I0129 15:27:40.360389 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c76eae897742ba4e95f6d60a81e2da82f1c0b0e220f48473436b03bff9f2f7e0"} Jan 29 15:27:40 crc kubenswrapper[5008]: I0129 15:27:40.360422 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"33245f510d76b9610b3e44259d0944eaef5873c4bc31c3f3012a013248d16933"} Jan 29 15:27:40 crc kubenswrapper[5008]: I0129 15:27:40.360435 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3c89e24fc5acc0577d3d738d63e7982aa32a07ecc01952570f6f417286b8747a"} Jan 29 15:27:40 crc kubenswrapper[5008]: I0129 15:27:40.360450 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:40 crc 
kubenswrapper[5008]: I0129 15:27:40.361245 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:40 crc kubenswrapper[5008]: I0129 15:27:40.361281 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:40 crc kubenswrapper[5008]: I0129 15:27:40.361294 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:40 crc kubenswrapper[5008]: W0129 15:27:40.504061 5008 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 29 15:27:40 crc kubenswrapper[5008]: E0129 15:27:40.504137 5008 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:27:40 crc kubenswrapper[5008]: I0129 15:27:40.519820 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:40 crc kubenswrapper[5008]: I0129 15:27:40.522122 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:40 crc kubenswrapper[5008]: I0129 15:27:40.522173 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:40 crc kubenswrapper[5008]: I0129 15:27:40.522185 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:40 crc kubenswrapper[5008]: I0129 15:27:40.522211 5008 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 15:27:40 crc kubenswrapper[5008]: E0129 15:27:40.522881 5008 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.50:6443: connect: connection refused" node="crc" Jan 29 15:27:40 crc kubenswrapper[5008]: W0129 15:27:40.700981 5008 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 29 15:27:40 crc kubenswrapper[5008]: E0129 15:27:40.701051 5008 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:27:40 crc kubenswrapper[5008]: W0129 15:27:40.723959 5008 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 29 15:27:40 crc kubenswrapper[5008]: E0129 15:27:40.724035 5008 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:27:41 crc kubenswrapper[5008]: I0129 15:27:41.264363 5008 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 29 15:27:41 crc kubenswrapper[5008]: W0129 15:27:41.268818 5008 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 29 15:27:41 crc kubenswrapper[5008]: E0129 15:27:41.268981 5008 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:27:41 crc kubenswrapper[5008]: I0129 15:27:41.270799 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 22:35:03.018932027 +0000 UTC Jan 29 15:27:41 crc kubenswrapper[5008]: I0129 15:27:41.367727 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"824cd135db982b1543c1eedd31029e6ffaf33861ab2214da9a9d50cf96681e8e"} Jan 29 15:27:41 crc kubenswrapper[5008]: I0129 15:27:41.367835 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d"} Jan 29 15:27:41 crc kubenswrapper[5008]: I0129 15:27:41.367909 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:41 crc kubenswrapper[5008]: I0129 15:27:41.370278 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:41 crc kubenswrapper[5008]: I0129 15:27:41.370326 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:41 crc kubenswrapper[5008]: I0129 15:27:41.370345 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:41 crc kubenswrapper[5008]: I0129 15:27:41.370692 5008 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b" exitCode=0 Jan 29 15:27:41 crc kubenswrapper[5008]: I0129 15:27:41.370797 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:41 crc kubenswrapper[5008]: I0129 15:27:41.370874 5008 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 15:27:41 crc kubenswrapper[5008]: I0129 15:27:41.370938 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:41 
crc kubenswrapper[5008]: I0129 15:27:41.371270 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:41 crc kubenswrapper[5008]: I0129 15:27:41.371295 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:41 crc kubenswrapper[5008]: I0129 15:27:41.371319 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b"} Jan 29 15:27:41 crc kubenswrapper[5008]: I0129 15:27:41.371765 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:41 crc kubenswrapper[5008]: I0129 15:27:41.371836 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:41 crc kubenswrapper[5008]: I0129 15:27:41.371956 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:41 crc kubenswrapper[5008]: I0129 15:27:41.372463 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:41 crc kubenswrapper[5008]: I0129 15:27:41.372486 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:41 crc kubenswrapper[5008]: I0129 15:27:41.372512 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:41 crc kubenswrapper[5008]: I0129 15:27:41.372518 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:41 crc kubenswrapper[5008]: I0129 15:27:41.372544 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:41 crc kubenswrapper[5008]: I0129 15:27:41.372532 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:41 crc kubenswrapper[5008]: I0129 15:27:41.372676 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:41 crc kubenswrapper[5008]: I0129 15:27:41.372718 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:41 crc kubenswrapper[5008]: I0129 15:27:41.372742 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:42 crc kubenswrapper[5008]: I0129 15:27:42.270993 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 18:24:03.503283688 +0000 UTC Jan 29 15:27:42 crc kubenswrapper[5008]: I0129 15:27:42.374882 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 15:27:42 crc kubenswrapper[5008]: I0129 15:27:42.376658 5008 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="824cd135db982b1543c1eedd31029e6ffaf33861ab2214da9a9d50cf96681e8e" exitCode=255 Jan 29 15:27:42 crc kubenswrapper[5008]: I0129 15:27:42.376758 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"824cd135db982b1543c1eedd31029e6ffaf33861ab2214da9a9d50cf96681e8e"} Jan 29 15:27:42 crc kubenswrapper[5008]: I0129 15:27:42.376817 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:42 crc kubenswrapper[5008]: I0129 15:27:42.377683 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:42 crc kubenswrapper[5008]: I0129 15:27:42.377736 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:42 crc kubenswrapper[5008]: I0129 15:27:42.377751 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:42 crc kubenswrapper[5008]: I0129 15:27:42.378342 5008 scope.go:117] "RemoveContainer" containerID="824cd135db982b1543c1eedd31029e6ffaf33861ab2214da9a9d50cf96681e8e" Jan 29 15:27:42 crc kubenswrapper[5008]: I0129 15:27:42.382887 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a"} Jan 29 15:27:42 crc kubenswrapper[5008]: I0129 15:27:42.382936 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35"} Jan 29 15:27:42 crc kubenswrapper[5008]: I0129 15:27:42.382946 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e"} Jan 29 15:27:42 crc kubenswrapper[5008]: I0129 15:27:42.382954 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb"} Jan 29 15:27:43 crc kubenswrapper[5008]: I0129 15:27:43.272112 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 10:33:53.481688924 +0000 UTC Jan 29 15:27:43 crc kubenswrapper[5008]: I0129 15:27:43.388310 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 15:27:43 crc kubenswrapper[5008]: I0129 15:27:43.389810 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761"} Jan 29 15:27:43 crc kubenswrapper[5008]: I0129 15:27:43.389895 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:43 crc kubenswrapper[5008]: I0129 15:27:43.389944 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:27:43 crc kubenswrapper[5008]: I0129 15:27:43.390701 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 15:27:43 crc kubenswrapper[5008]: I0129 15:27:43.390723 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:43 crc kubenswrapper[5008]: I0129 15:27:43.390732 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:43 crc kubenswrapper[5008]: I0129 15:27:43.394845 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854"} Jan 29 15:27:43 crc kubenswrapper[5008]: I0129 15:27:43.394971 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:43 crc kubenswrapper[5008]: I0129 15:27:43.395822 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:43 crc kubenswrapper[5008]: I0129 15:27:43.395858 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:43 crc kubenswrapper[5008]: I0129 15:27:43.395870 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:43 crc kubenswrapper[5008]: I0129 15:27:43.491523 5008 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 15:27:43 crc kubenswrapper[5008]: I0129 15:27:43.723601 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:43 crc kubenswrapper[5008]: I0129 15:27:43.725179 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:43 crc kubenswrapper[5008]: I0129 15:27:43.725214 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:43 crc kubenswrapper[5008]: I0129 15:27:43.725224 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:43 crc kubenswrapper[5008]: I0129 15:27:43.725245 5008 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 15:27:44 crc kubenswrapper[5008]: I0129 15:27:44.272278 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 12:38:12.578348089 +0000 UTC Jan 29 15:27:44 crc kubenswrapper[5008]: I0129 15:27:44.398344 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:44 crc kubenswrapper[5008]: I0129 15:27:44.398379 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:27:44 crc kubenswrapper[5008]: I0129 15:27:44.398344 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:44 crc kubenswrapper[5008]: I0129 15:27:44.399998 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:44 crc kubenswrapper[5008]: I0129 15:27:44.400033 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:44 crc kubenswrapper[5008]: I0129 15:27:44.400044 5008 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:44 crc kubenswrapper[5008]: I0129 15:27:44.400279 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:44 crc kubenswrapper[5008]: I0129 15:27:44.400388 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:44 crc kubenswrapper[5008]: I0129 15:27:44.400449 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:45 crc kubenswrapper[5008]: I0129 15:27:45.273345 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 18:16:50.658044434 +0000 UTC Jan 29 15:27:45 crc kubenswrapper[5008]: I0129 15:27:45.401566 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:45 crc kubenswrapper[5008]: I0129 15:27:45.403253 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:45 crc kubenswrapper[5008]: I0129 15:27:45.403319 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:45 crc kubenswrapper[5008]: I0129 15:27:45.403334 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:45 crc kubenswrapper[5008]: I0129 15:27:45.479343 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:27:45 crc kubenswrapper[5008]: I0129 15:27:45.479677 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:45 crc kubenswrapper[5008]: I0129 15:27:45.481637 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:45 crc kubenswrapper[5008]: I0129 15:27:45.481738 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:45 crc kubenswrapper[5008]: I0129 15:27:45.481778 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:45 crc kubenswrapper[5008]: I0129 15:27:45.797317 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:27:45 crc kubenswrapper[5008]: I0129 15:27:45.875250 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:27:45 crc kubenswrapper[5008]: I0129 15:27:45.881817 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:27:45 crc kubenswrapper[5008]: I0129 15:27:45.927736 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:27:46 crc kubenswrapper[5008]: I0129 15:27:46.229531 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:27:46 crc kubenswrapper[5008]: I0129 15:27:46.273572 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, 
rotation deadline is 2025-12-22 07:07:51.719818949 +0000 UTC Jan 29 15:27:46 crc kubenswrapper[5008]: I0129 15:27:46.404401 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:46 crc kubenswrapper[5008]: I0129 15:27:46.404427 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:46 crc kubenswrapper[5008]: I0129 15:27:46.404564 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:27:46 crc kubenswrapper[5008]: I0129 15:27:46.405675 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:46 crc kubenswrapper[5008]: I0129 15:27:46.405717 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:46 crc kubenswrapper[5008]: I0129 15:27:46.405735 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:46 crc kubenswrapper[5008]: I0129 15:27:46.405678 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:46 crc kubenswrapper[5008]: I0129 15:27:46.405827 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:46 crc kubenswrapper[5008]: I0129 15:27:46.405841 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:46 crc kubenswrapper[5008]: I0129 15:27:46.975086 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 15:27:46 crc kubenswrapper[5008]: I0129 15:27:46.975417 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:46 crc kubenswrapper[5008]: I0129 15:27:46.977138 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:46 crc kubenswrapper[5008]: I0129 15:27:46.977174 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:46 crc kubenswrapper[5008]: I0129 15:27:46.977184 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:47 crc kubenswrapper[5008]: I0129 15:27:47.274492 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 11:58:53.630245589 +0000 UTC Jan 29 15:27:47 crc kubenswrapper[5008]: I0129 15:27:47.408497 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:47 crc kubenswrapper[5008]: I0129 15:27:47.408528 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:47 crc kubenswrapper[5008]: I0129 15:27:47.409820 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:47 crc kubenswrapper[5008]: I0129 15:27:47.409931 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:47 crc kubenswrapper[5008]: I0129 15:27:47.410020 5008 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 15:27:47 crc kubenswrapper[5008]: I0129 15:27:47.409888 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:47 crc kubenswrapper[5008]: I0129 15:27:47.410113 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:47 crc kubenswrapper[5008]: I0129 15:27:47.410127 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:47 crc kubenswrapper[5008]: E0129 15:27:47.417154 5008 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 29 15:27:48 crc kubenswrapper[5008]: I0129 15:27:48.102527 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 29 15:27:48 crc kubenswrapper[5008]: I0129 15:27:48.102856 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:48 crc kubenswrapper[5008]: I0129 15:27:48.104309 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:48 crc kubenswrapper[5008]: I0129 15:27:48.104357 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:48 crc kubenswrapper[5008]: I0129 15:27:48.104372 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:48 crc kubenswrapper[5008]: I0129 15:27:48.274960 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 18:00:46.637701198 +0000 UTC Jan 29 15:27:48 crc kubenswrapper[5008]: I0129 15:27:48.479842 5008 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 15:27:48 crc kubenswrapper[5008]: I0129 15:27:48.479925 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 15:27:49 crc kubenswrapper[5008]: I0129 15:27:49.275707 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 00:04:44.500471307 +0000 UTC Jan 29 15:27:49 crc kubenswrapper[5008]: I0129 15:27:49.464117 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 29 15:27:49 crc kubenswrapper[5008]: I0129 15:27:49.464371 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:49 crc kubenswrapper[5008]: I0129 15:27:49.466260 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:49 crc kubenswrapper[5008]: I0129 15:27:49.466336 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:27:49 crc kubenswrapper[5008]: I0129 15:27:49.466358 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:50 crc kubenswrapper[5008]: I0129 15:27:50.277006 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 08:50:02.581188808 +0000 UTC Jan 29 15:27:51 crc kubenswrapper[5008]: I0129 15:27:51.278159 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 11:46:23.766109965 +0000 UTC Jan 29 15:27:52 crc kubenswrapper[5008]: I0129 15:27:52.264860 5008 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 29 15:27:52 crc kubenswrapper[5008]: I0129 15:27:52.279272 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 20:55:53.543757157 +0000 UTC Jan 29 15:27:52 crc kubenswrapper[5008]: I0129 15:27:52.701226 5008 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 29 15:27:52 crc kubenswrapper[5008]: I0129 15:27:52.701484 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 29 15:27:52 crc kubenswrapper[5008]: I0129 15:27:52.707914 5008 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 29 15:27:52 crc kubenswrapper[5008]: I0129 15:27:52.708101 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 29 15:27:53 crc kubenswrapper[5008]: I0129 15:27:53.280355 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 03:12:17.916324867 +0000 UTC Jan 29 15:27:54 crc kubenswrapper[5008]: I0129 15:27:54.282386 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 06:26:49.950322813 +0000 UTC Jan 29 15:27:54 crc kubenswrapper[5008]: I0129 15:27:54.892625 5008 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 
192.168.126.11:17697: connect: connection refused" start-of-body= Jan 29 15:27:54 crc kubenswrapper[5008]: I0129 15:27:54.892709 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 29 15:27:55 crc kubenswrapper[5008]: I0129 15:27:55.282905 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 07:24:05.749104779 +0000 UTC Jan 29 15:27:55 crc kubenswrapper[5008]: I0129 15:27:55.803684 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:27:55 crc kubenswrapper[5008]: I0129 15:27:55.803987 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:55 crc kubenswrapper[5008]: I0129 15:27:55.805190 5008 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 29 15:27:55 crc kubenswrapper[5008]: I0129 15:27:55.805608 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:55 crc kubenswrapper[5008]: I0129 15:27:55.805679 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:55 crc kubenswrapper[5008]: I0129 15:27:55.805708 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:55 crc kubenswrapper[5008]: I0129 15:27:55.805861 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 29 15:27:55 crc kubenswrapper[5008]: I0129 15:27:55.809179 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:27:56 crc kubenswrapper[5008]: I0129 15:27:56.283870 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 01:14:55.726457632 +0000 UTC Jan 29 15:27:56 crc kubenswrapper[5008]: I0129 15:27:56.432220 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:27:56 crc kubenswrapper[5008]: I0129 15:27:56.432599 5008 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 29 15:27:56 crc kubenswrapper[5008]: I0129 15:27:56.432681 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" 
probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 29 15:27:56 crc kubenswrapper[5008]: I0129 15:27:56.433402 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:27:56 crc kubenswrapper[5008]: I0129 15:27:56.433461 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:27:56 crc kubenswrapper[5008]: I0129 15:27:56.433478 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:27:57 crc kubenswrapper[5008]: I0129 15:27:57.284688 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 13:06:20.493862877 +0000 UTC Jan 29 15:27:57 crc kubenswrapper[5008]: E0129 15:27:57.417327 5008 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 29 15:27:57 crc kubenswrapper[5008]: I0129 15:27:57.701750 5008 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 29 15:27:57 crc kubenswrapper[5008]: E0129 15:27:57.701870 5008 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 29 15:27:57 crc kubenswrapper[5008]: I0129 15:27:57.705178 5008 trace.go:236] Trace[1473017949]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 15:27:45.339) (total time: 12365ms): Jan 29 15:27:57 crc kubenswrapper[5008]: Trace[1473017949]: ---"Objects listed" error: 12365ms (15:27:57.705) Jan 29 15:27:57 crc kubenswrapper[5008]: Trace[1473017949]: [12.36595257s] [12.36595257s] END Jan 29 15:27:57 crc kubenswrapper[5008]: I0129 15:27:57.705196 5008 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 29 15:27:57 crc kubenswrapper[5008]: I0129 15:27:57.705214 5008 trace.go:236] Trace[1594515651]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 15:27:46.616) (total time: 11088ms): Jan 29 15:27:57 crc kubenswrapper[5008]: Trace[1594515651]: ---"Objects listed" error: 11088ms (15:27:57.705) Jan 29 15:27:57 crc kubenswrapper[5008]: Trace[1594515651]: [11.088289036s] [11.088289036s] END Jan 29 15:27:57 crc kubenswrapper[5008]: I0129 15:27:57.705251 5008 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 29 15:27:57 crc kubenswrapper[5008]: E0129 15:27:57.705320 5008 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 29 15:27:57 crc kubenswrapper[5008]: I0129 15:27:57.705457 5008 trace.go:236] Trace[206545253]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 15:27:46.020) (total time: 11685ms): Jan 29 15:27:57 crc kubenswrapper[5008]: Trace[206545253]: ---"Objects listed" error: 11685ms (15:27:57.705) Jan 29 15:27:57 crc kubenswrapper[5008]: Trace[206545253]: [11.685202424s] [11.685202424s] END Jan 29 15:27:57 crc kubenswrapper[5008]: I0129 15:27:57.705483 5008 reflector.go:368] Caches 
populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 29 15:27:57 crc kubenswrapper[5008]: I0129 15:27:57.706098 5008 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 29 15:27:57 crc kubenswrapper[5008]: I0129 15:27:57.708104 5008 trace.go:236] Trace[637164502]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 15:27:46.310) (total time: 11397ms): Jan 29 15:27:57 crc kubenswrapper[5008]: Trace[637164502]: ---"Objects listed" error: 11397ms (15:27:57.707) Jan 29 15:27:57 crc kubenswrapper[5008]: Trace[637164502]: [11.39716807s] [11.39716807s] END Jan 29 15:27:57 crc kubenswrapper[5008]: I0129 15:27:57.708139 5008 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 29 15:27:57 crc kubenswrapper[5008]: I0129 15:27:57.720986 5008 csr.go:261] certificate signing request csr-42t29 is approved, waiting to be issued Jan 29 15:27:57 crc kubenswrapper[5008]: I0129 15:27:57.741979 5008 csr.go:257] certificate signing request csr-42t29 is issued Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.119984 5008 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.120058 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.247727 5008 apiserver.go:52] "Watching apiserver" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.276241 5008 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.276509 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.276857 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.276947 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.277007 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.277070 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.277096 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.277146 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.277147 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.277203 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.277281 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.278791 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.279835 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.279888 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.279952 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.279992 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.280198 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.280487 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.281189 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.283184 5008 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-operator"/"metrics-tls" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.285069 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 15:09:22.9882038 +0000 UTC Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.307996 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.319165 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.330276 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.340526 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.352299 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.367035 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.370060 5008 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.380024 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.390956 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411188 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411240 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411264 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411288 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411310 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411335 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411359 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 15:27:58 crc 
kubenswrapper[5008]: I0129 15:27:58.411381 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411402 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411422 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411443 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411505 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411532 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411558 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411581 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411604 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411659 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 15:27:58 crc 
kubenswrapper[5008]: I0129 15:27:58.411686 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411709 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411732 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411519 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.411890 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.412164 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.412607 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.412677 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.412754 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.412774 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.412833 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.412823 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.412950 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.412987 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413024 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413047 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413048 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413105 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413137 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413169 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413192 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413216 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413305 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413264 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413396 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413420 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413445 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413468 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413494 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413518 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413543 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413567 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413596 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") 
" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413624 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413648 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413680 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413860 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413897 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.413996 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.414012 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.414223 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.414248 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.414265 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.414297 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.414346 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.414693 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.414803 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.414830 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.414842 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.414906 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.414940 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:27:58.914914047 +0000 UTC m=+22.587768294 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.415032 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.415068 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416269 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416393 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416425 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416458 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416484 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416538 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416570 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416594 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416619 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416643 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416666 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416693 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416716 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416723 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416791 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416824 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416853 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416886 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416911 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416940 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" 
(UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416966 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.416991 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417017 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417044 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417088 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417115 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417141 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417152 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417165 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417190 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417214 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417239 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417266 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417293 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417319 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417342 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417348 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417367 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417392 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417416 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417440 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417465 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417489 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417513 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417536 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417558 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417583 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod 
\"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417607 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417629 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417652 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417674 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417698 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417719 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417743 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417752 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417804 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417830 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417856 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417877 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417904 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417927 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417949 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417972 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.417999 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418025 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: 
\"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418048 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418071 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418093 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418098 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418115 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418136 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418158 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418180 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418201 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418222 5008 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418243 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418265 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418287 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418291 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418316 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418339 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418365 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418390 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418415 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418439 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418464 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418488 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418491 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418510 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418532 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418555 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418577 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418600 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: 
\"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418623 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418645 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418667 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418689 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418712 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418733 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418759 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418800 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418824 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418848 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418854 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418873 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418923 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418950 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418972 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.418994 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419015 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419038 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419062 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 
15:27:58.419087 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419113 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419135 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419156 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419179 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419205 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419227 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419251 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419274 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419297 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 
29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419322 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419347 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419369 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419393 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419417 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419439 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419466 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419491 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419514 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419536 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod 
\"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419558 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419579 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419602 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419625 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419648 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419672 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419694 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419722 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419744 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419769 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419812 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419835 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419858 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419879 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419902 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419924 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419947 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419970 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419994 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420018 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420244 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420269 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420292 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420317 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420340 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420363 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420390 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420415 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420438 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420461 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420485 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420530 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420560 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420589 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420624 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420653 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420682 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420710 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420737 5008 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420767 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420883 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420912 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420939 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420967 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420994 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421079 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421096 5008 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421110 5008 reconciler_common.go:293] "Volume detached for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421124 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421137 5008 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421149 5008 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421162 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421174 5008 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421186 5008 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421202 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421216 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421228 5008 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421241 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421254 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421266 5008 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421280 5008 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421292 5008 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421304 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421317 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421330 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421364 5008 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421378 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421392 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421404 5008 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421416 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421429 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421443 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421460 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421473 5008 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" 
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421488 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421501 5008 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421514 5008 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421526 5008 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421538 5008 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421551 5008 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419035 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419092 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419239 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419302 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419457 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.422390 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.419840 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420059 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420367 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.420962 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421039 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421406 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421558 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.421649 5008 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.422601 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:27:58.92258319 +0000 UTC m=+22.595437437 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421699 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.422624 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.421881 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.422083 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.422131 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.422314 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.422333 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.422526 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.422852 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.423027 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.423042 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.423137 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.423150 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.423244 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.423400 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.423501 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.423637 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.423794 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.423918 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.424035 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). 
InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.424048 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.424428 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.424533 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.424568 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.424633 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.425523 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.425547 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.425817 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.426053 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.426374 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.426613 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.426652 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.426732 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.427020 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.427022 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.427285 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.427756 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.427846 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.427918 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.428163 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.428250 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.429096 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.429866 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.433035 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.437358 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.437592 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.437429 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.437930 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.439033 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.439219 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.439328 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.439584 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.439939 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.440193 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.441579 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.442565 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.443332 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.443814 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.444305 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.444509 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.445849 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.446895 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.464236 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.464345 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.464518 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.464285 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.465034 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.465434 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.465535 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.465854 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.466563 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.468534 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.468589 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.468720 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.468959 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.469028 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.469166 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.469395 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.469435 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.469617 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.469638 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.469649 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.469989 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.470280 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.469890 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.472408 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.475439 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.476514 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.476911 5008 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.477323 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.477636 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:27:58.977171623 +0000 UTC m=+22.650025860 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.479214 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.479318 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.479328 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.479925 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.480848 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.481046 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.482198 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.483155 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.484273 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.484899 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.485201 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.485279 5008 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.485363 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.485463 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.485888 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.486836 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.491115 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.491752 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.499420 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.499594 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.499686 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.499752 5008 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.499901 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:27:58.999878256 +0000 UTC m=+22.672732493 (durationBeforeRetry 500ms). 
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.501121 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.501386 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.501527 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.501645 5008 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761" exitCode=255 Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.501756 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761"} Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.501878 5008 scope.go:117] "RemoveContainer" containerID="824cd135db982b1543c1eedd31029e6ffaf33861ab2214da9a9d50cf96681e8e" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.501767 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.502399 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.502481 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.502740 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.502754 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.502828 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.506339 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.506744 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.507144 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.508035 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.508083 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.508568 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.510589 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.510885 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.510903 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.510916 5008 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.510959 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:27:59.010943165 +0000 UTC m=+22.683797402 (durationBeforeRetry 500ms). 
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.511184 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.512072 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.512535 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.512815 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.514016 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.515122 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.515154 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.515192 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.515201 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.517041 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.517323 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.517621 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.517756 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.518375 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.518490 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.519753 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.519934 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.520060 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.520254 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.520483 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.520554 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.520771 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.521007 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.522203 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.522683 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523243 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523562 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523628 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523673 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523753 5008 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523767 5008 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523796 5008 
reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523811 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523823 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523835 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523848 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523859 5008 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523873 5008 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523885 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523896 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523908 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523926 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523937 5008 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523949 5008 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523958 5008 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523970 5008 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523980 5008 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.523990 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524000 5008 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524011 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524022 5008 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524033 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524043 5008 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524053 5008 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524138 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: W0129 15:27:58.524216 5008 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes/kubernetes.io~secret/certs Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524229 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524351 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524392 5008 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524407 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524420 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524444 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524457 5008 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524470 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524480 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524491 5008 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524500 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc 
Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524521 5008 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524532 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524543 5008 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524555 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524566 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524576 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524586 5008 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524598 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524611 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524623 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524636 5008 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524647 5008 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524658 5008 reconciler_common.go:293] "Volume detached for volume
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524668 5008 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524680 5008 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.524691 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525145 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525228 5008 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525246 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525258 5008 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525269 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525279 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525290 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525300 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525314 5008 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525327 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525337 5008 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525350 5008 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525361 5008 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525357 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525373 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525385 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525404 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525415 5008 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525425 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525436 5008 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525456 5008 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525467 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525477 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525486 5008 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525495 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525504 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525514 5008 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525525 5008 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525535 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525545 5008 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525582 5008 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525594 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525606 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525616 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: 
\"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525626 5008 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525636 5008 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525646 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525657 5008 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525669 5008 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525679 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525690 5008 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525700 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525711 5008 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525721 5008 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525742 5008 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525754 5008 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525764 5008 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525791 5008 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525804 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525815 5008 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525825 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525837 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525847 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525857 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525869 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525881 5008 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525891 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525901 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525911 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525921 5008 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525931 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525942 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525953 5008 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525963 5008 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525974 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525984 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.525996 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526006 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526017 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526030 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526041 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526052 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 
15:27:58.526064 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526074 5008 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526086 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526098 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526109 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526120 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526131 5008 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526144 5008 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526156 5008 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526168 5008 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526179 5008 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526190 5008 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526202 5008 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc 
kubenswrapper[5008]: I0129 15:27:58.526212 5008 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526224 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526262 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526275 5008 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526287 5008 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526299 5008 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526310 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526321 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526330 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526341 5008 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526352 5008 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526363 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526373 5008 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 
crc kubenswrapper[5008]: I0129 15:27:58.526384 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526394 5008 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526733 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.526767 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.529111 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.529414 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.529512 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.531417 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.533625 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.535016 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.544736 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.545252 5008 scope.go:117] "RemoveContainer" containerID="4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761" Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.545888 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.553331 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.555500 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.561644 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.572391 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.582670 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.588482 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.594459 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.595494 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.601164 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.608466 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"n
ame\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.623854 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://824cd135db982b1543c1eedd31029e6ffaf33861ab2214da9a9d50cf96681e8e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"message\\\":\\\"W0129 15:27:40.899468 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 
15:27:40.899809 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700460 cert, and key in /tmp/serving-cert-2093150862/serving-signer.crt, /tmp/serving-cert-2093150862/serving-signer.key\\\\nI0129 15:27:41.249157 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:27:41.261429 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:27:41.261720 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:41.263585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2093150862/tls.crt::/tmp/serving-cert-2093150862/tls.key\\\\\\\"\\\\nF0129 15:27:41.657916 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.627188 5008 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.627683 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.627752 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.627851 5008 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.627923 5008 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.627978 5008 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.628037 5008 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.628117 5008 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.628205 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.628269 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.651107 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.661831 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.680420 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.697530 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.711495 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.743865 5008 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-29 15:22:57 +0000 UTC, rotation deadline is 2026-10-22 21:15:26.897945549 +0000 UTC Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.743928 5008 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6389h47m28.154019666s for next certificate rotation Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.866553 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-wtvvb"] Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.867178 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-wtvvb" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.869336 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.869565 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.873144 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.894444 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.906631 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.917368 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc 
kubenswrapper[5008]: I0129 15:27:58.927327 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://824cd135db982b1543c1eedd31029e6ffaf33861ab2214da9a9d50cf96681e8e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"message\\\":\\\"W0129 15:27:40.899468 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:27:40.899809 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700460 cert, and key in /tmp/serving-cert-2093150862/serving-signer.crt, /tmp/serving-cert-2093150862/serving-signer.key\\\\nI0129 15:27:41.249157 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:27:41.261429 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:27:41.261720 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:41.263585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2093150862/tls.crt::/tmp/serving-cert-2093150862/tls.key\\\\\\\"\\\\nF0129 15:27:41.657916 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" 
len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.931529 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.931607 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtnst\" (UniqueName: \"kubernetes.io/projected/2dede057-dcce-4302-8efe-e2c3640308ec-kube-api-access-mtnst\") pod \"node-resolver-wtvvb\" (UID: \"2dede057-dcce-4302-8efe-e2c3640308ec\") " pod="openshift-dns/node-resolver-wtvvb" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.931649 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2dede057-dcce-4302-8efe-e2c3640308ec-hosts-file\") pod \"node-resolver-wtvvb\" (UID: \"2dede057-dcce-4302-8efe-e2c3640308ec\") " pod="openshift-dns/node-resolver-wtvvb" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.931673 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.931834 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:27:59.931766162 +0000 UTC m=+23.604620399 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.931878 5008 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:27:58 crc kubenswrapper[5008]: E0129 15:27:58.931968 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:27:59.931944297 +0000 UTC m=+23.604798704 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.941596 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.954409 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.968847 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.980088 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:58 crc kubenswrapper[5008]: I0129 15:27:58.990694 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.032396 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtnst\" (UniqueName: \"kubernetes.io/projected/2dede057-dcce-4302-8efe-e2c3640308ec-kube-api-access-mtnst\") pod \"node-resolver-wtvvb\" (UID: \"2dede057-dcce-4302-8efe-e2c3640308ec\") " pod="openshift-dns/node-resolver-wtvvb" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.032435 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.032455 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.032481 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2dede057-dcce-4302-8efe-e2c3640308ec-hosts-file\") pod \"node-resolver-wtvvb\" (UID: \"2dede057-dcce-4302-8efe-e2c3640308ec\") " pod="openshift-dns/node-resolver-wtvvb" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.032499 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:27:59 crc kubenswrapper[5008]: E0129 15:27:59.032565 5008 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:27:59 crc kubenswrapper[5008]: E0129 15:27:59.032628 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:00.032613495 +0000 UTC m=+23.705467732 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:27:59 crc kubenswrapper[5008]: E0129 15:27:59.032838 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:27:59 crc kubenswrapper[5008]: E0129 15:27:59.032857 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:27:59 crc kubenswrapper[5008]: E0129 15:27:59.032870 5008 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.032837 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2dede057-dcce-4302-8efe-e2c3640308ec-hosts-file\") pod \"node-resolver-wtvvb\" (UID: \"2dede057-dcce-4302-8efe-e2c3640308ec\") " pod="openshift-dns/node-resolver-wtvvb" Jan 29 15:27:59 crc kubenswrapper[5008]: E0129 15:27:59.032904 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:00.032893112 +0000 UTC m=+23.705747349 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:27:59 crc kubenswrapper[5008]: E0129 15:27:59.032908 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:27:59 crc kubenswrapper[5008]: E0129 15:27:59.032954 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:27:59 crc kubenswrapper[5008]: E0129 15:27:59.032976 5008 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:27:59 crc kubenswrapper[5008]: E0129 15:27:59.033008 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:00.032997525 +0000 UTC m=+23.705851762 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.051438 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtnst\" (UniqueName: \"kubernetes.io/projected/2dede057-dcce-4302-8efe-e2c3640308ec-kube-api-access-mtnst\") pod \"node-resolver-wtvvb\" (UID: \"2dede057-dcce-4302-8efe-e2c3640308ec\") " pod="openshift-dns/node-resolver-wtvvb" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.181494 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-wtvvb" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.230927 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-78bl2"] Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.231508 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-42hcz"] Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.231673 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-gk9q8"] Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.231953 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.232333 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.232569 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.234551 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.234870 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.235145 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.235658 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.235761 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.235842 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.236308 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.236544 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.236561 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.236698 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.236743 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.238101 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 29 15:27:59 crc kubenswrapper[5008]: W0129 15:27:59.239725 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2dede057_dcce_4302_8efe_e2c3640308ec.slice/crio-cd2df9713cf65dda3bee2262acc3ddea403a1fb82fd0fbfc4cb4187c1c4d87fc WatchSource:0}: Error finding container cd2df9713cf65dda3bee2262acc3ddea403a1fb82fd0fbfc4cb4187c1c4d87fc: Status 404 returned error can't find the container with id cd2df9713cf65dda3bee2262acc3ddea403a1fb82fd0fbfc4cb4187c1c4d87fc Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.247663 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.261448 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.277022 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.285499 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 20:04:11.636766864 +0000 UTC Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.291657 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.304850 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.331075 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.332710 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.333492 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.333378 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://824cd135db982b1543c1eedd31029e6ffaf33861ab2214da9a9d50cf96681e8e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"message\\\":\\\"W0129 15:27:40.899468 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 
15:27:40.899809 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700460 cert, and key in /tmp/serving-cert-2093150862/serving-signer.crt, /tmp/serving-cert-2093150862/serving-signer.key\\\\nI0129 15:27:41.249157 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:27:41.261429 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:27:41.261720 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:41.263585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2093150862/tls.crt::/tmp/serving-cert-2093150862/tls.key\\\\\\\"\\\\nF0129 15:27:41.657916 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.334143 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.335127 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.335615 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.336270 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-6blck\" (UniqueName: \"kubernetes.io/projected/ca0fcb2d-733d-4bde-9bbf-3f7082d0e244-kube-api-access-6blck\") pod \"machine-config-daemon-gk9q8\" (UID: \"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\") " pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.336313 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-var-lib-cni-bin\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.336337 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg75x\" (UniqueName: \"kubernetes.io/projected/cdd8ae23-3f9f-49f8-928d-46dad823fde4-kube-api-access-tg75x\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.336371 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-multus-cni-dir\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.336394 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-etc-kubernetes\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.336413 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/cdd8ae23-3f9f-49f8-928d-46dad823fde4-multus-daemon-config\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.336433 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fa065d0b-d690-4a7d-9079-a8f976a7aca3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.336452 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cdd8ae23-3f9f-49f8-928d-46dad823fde4-cni-binary-copy\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.336474 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fa065d0b-d690-4a7d-9079-a8f976a7aca3-system-cni-dir\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.336502 5008 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-run-k8s-cni-cncf-io\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.336521 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-var-lib-cni-multus\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.336538 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.336540 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-hostroot\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.336848 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fa065d0b-d690-4a7d-9079-a8f976a7aca3-os-release\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.336961 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trwfk\" (UniqueName: \"kubernetes.io/projected/fa065d0b-d690-4a7d-9079-a8f976a7aca3-kube-api-access-trwfk\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.337064 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.337063 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-cnibin\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.337297 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ca0fcb2d-733d-4bde-9bbf-3f7082d0e244-proxy-tls\") pod \"machine-config-daemon-gk9q8\" (UID: \"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\") " pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.337573 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ca0fcb2d-733d-4bde-9bbf-3f7082d0e244-mcd-auth-proxy-config\") pod 
\"machine-config-daemon-gk9q8\" (UID: \"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\") " pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.337665 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-os-release\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.337694 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fa065d0b-d690-4a7d-9079-a8f976a7aca3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.337762 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-system-cni-dir\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.337806 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fa065d0b-d690-4a7d-9079-a8f976a7aca3-cnibin\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.337836 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fa065d0b-d690-4a7d-9079-a8f976a7aca3-cni-binary-copy\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.337866 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-run-multus-certs\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.337885 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-run-netns\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.337909 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ca0fcb2d-733d-4bde-9bbf-3f7082d0e244-rootfs\") pod \"machine-config-daemon-gk9q8\" (UID: \"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\") " pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.337932 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-multus-socket-dir-parent\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.337962 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-var-lib-kubelet\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.337987 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-multus-conf-dir\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.338329 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.338852 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.339738 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.340512 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.342050 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.342710 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.343548 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.343890 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.345029 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.345589 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.347514 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.349081 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.349763 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.350698 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.351336 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" 
path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.351770 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.355222 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.357046 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.357699 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.359737 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.361199 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.361673 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.362251 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.363531 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.364047 5008 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.364196 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.364547 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.366412 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.367180 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.371626 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.373349 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.374012 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.374346 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.374927 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.375532 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.377164 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.377661 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.378874 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.379524 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.380523 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.382845 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.383482 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.386315 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.387835 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.388736 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.389814 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.390396 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.391077 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.392108 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.392661 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.393591 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.403947 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.416303 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.426340 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://824cd135db982b1543c1eedd31029e6ffaf33861ab2214da9a9d50cf96681e8e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"message\\\":\\\"W0129 15:27:40.899468 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:27:40.899809 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700460 cert, and key in /tmp/serving-cert-2093150862/serving-signer.crt, /tmp/serving-cert-2093150862/serving-signer.key\\\\nI0129 15:27:41.249157 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:27:41.261429 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:27:41.261720 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:41.263585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2093150862/tls.crt::/tmp/serving-cert-2093150862/tls.key\\\\\\\"\\\\nF0129 15:27:41.657916 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for 
mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.435482 5008 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.438529 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-system-cni-dir\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.438565 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fa065d0b-d690-4a7d-9079-a8f976a7aca3-cnibin\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.438589 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fa065d0b-d690-4a7d-9079-a8f976a7aca3-cni-binary-copy\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.438614 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-run-multus-certs\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.438668 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fa065d0b-d690-4a7d-9079-a8f976a7aca3-cnibin\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.438704 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-run-multus-certs\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.438636 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ca0fcb2d-733d-4bde-9bbf-3f7082d0e244-rootfs\") pod \"machine-config-daemon-gk9q8\" (UID: \"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\") " pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.438764 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-run-netns\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.438798 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-system-cni-dir\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.438810 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ca0fcb2d-733d-4bde-9bbf-3f7082d0e244-rootfs\") pod \"machine-config-daemon-gk9q8\" (UID: \"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\") " pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.438860 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-multus-socket-dir-parent\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.438814 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-multus-socket-dir-parent\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.438901 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-run-netns\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " 
pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.438901 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-var-lib-kubelet\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.438957 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-var-lib-kubelet\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.438994 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-multus-conf-dir\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439020 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6blck\" (UniqueName: \"kubernetes.io/projected/ca0fcb2d-733d-4bde-9bbf-3f7082d0e244-kube-api-access-6blck\") pod \"machine-config-daemon-gk9q8\" (UID: \"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\") " pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439038 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-var-lib-cni-bin\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439055 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg75x\" (UniqueName: \"kubernetes.io/projected/cdd8ae23-3f9f-49f8-928d-46dad823fde4-kube-api-access-tg75x\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439063 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-multus-conf-dir\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439125 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-multus-cni-dir\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439127 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-var-lib-cni-bin\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439071 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-multus-cni-dir\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439183 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-etc-kubernetes\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439218 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cdd8ae23-3f9f-49f8-928d-46dad823fde4-cni-binary-copy\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439242 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/cdd8ae23-3f9f-49f8-928d-46dad823fde4-multus-daemon-config\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439265 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fa065d0b-d690-4a7d-9079-a8f976a7aca3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439289 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fa065d0b-d690-4a7d-9079-a8f976a7aca3-system-cni-dir\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439311 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-var-lib-cni-multus\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439334 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-hostroot\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439359 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fa065d0b-d690-4a7d-9079-a8f976a7aca3-system-cni-dir\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439368 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-run-k8s-cni-cncf-io\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439406 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-etc-kubernetes\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439420 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fa065d0b-d690-4a7d-9079-a8f976a7aca3-os-release\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439444 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trwfk\" (UniqueName: \"kubernetes.io/projected/fa065d0b-d690-4a7d-9079-a8f976a7aca3-kube-api-access-trwfk\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439469 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-cnibin\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439489 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ca0fcb2d-733d-4bde-9bbf-3f7082d0e244-proxy-tls\") pod \"machine-config-daemon-gk9q8\" (UID: \"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\") " pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439511 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ca0fcb2d-733d-4bde-9bbf-3f7082d0e244-mcd-auth-proxy-config\") pod \"machine-config-daemon-gk9q8\" (UID: \"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\") " pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439534 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fa065d0b-d690-4a7d-9079-a8f976a7aca3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439557 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-os-release\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439639 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-os-release\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439663 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fa065d0b-d690-4a7d-9079-a8f976a7aca3-cni-binary-copy\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439676 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-var-lib-cni-multus\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439716 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-cnibin\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439869 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-hostroot\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439890 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fa065d0b-d690-4a7d-9079-a8f976a7aca3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439899 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/cdd8ae23-3f9f-49f8-928d-46dad823fde4-host-run-k8s-cni-cncf-io\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.439971 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fa065d0b-d690-4a7d-9079-a8f976a7aca3-os-release\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.440037 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/cdd8ae23-3f9f-49f8-928d-46dad823fde4-multus-daemon-config\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.440091 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cdd8ae23-3f9f-49f8-928d-46dad823fde4-cni-binary-copy\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 
15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.440435 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ca0fcb2d-733d-4bde-9bbf-3f7082d0e244-mcd-auth-proxy-config\") pod \"machine-config-daemon-gk9q8\" (UID: \"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\") " pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.440510 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fa065d0b-d690-4a7d-9079-a8f976a7aca3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.442917 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ca0fcb2d-733d-4bde-9bbf-3f7082d0e244-proxy-tls\") pod \"machine-config-daemon-gk9q8\" (UID: \"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\") " pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.448547 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.459844 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trwfk\" (UniqueName: \"kubernetes.io/projected/fa065d0b-d690-4a7d-9079-a8f976a7aca3-kube-api-access-trwfk\") pod \"multus-additional-cni-plugins-78bl2\" (UID: \"fa065d0b-d690-4a7d-9079-a8f976a7aca3\") " pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.461316 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6blck\" (UniqueName: \"kubernetes.io/projected/ca0fcb2d-733d-4bde-9bbf-3f7082d0e244-kube-api-access-6blck\") pod \"machine-config-daemon-gk9q8\" (UID: \"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\") " pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.461754 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.463958 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg75x\" (UniqueName: \"kubernetes.io/projected/cdd8ae23-3f9f-49f8-928d-46dad823fde4-kube-api-access-tg75x\") pod \"multus-42hcz\" (UID: \"cdd8ae23-3f9f-49f8-928d-46dad823fde4\") " pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.479578 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.491547 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.497964 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.503051 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.506316 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.508806 5008 scope.go:117] "RemoveContainer" containerID="4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761" Jan 29 15:27:59 crc kubenswrapper[5008]: E0129 15:27:59.509001 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.510255 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-wtvvb" event={"ID":"2dede057-dcce-4302-8efe-e2c3640308ec","Type":"ContainerStarted","Data":"63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517"} Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.510295 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-wtvvb" event={"ID":"2dede057-dcce-4302-8efe-e2c3640308ec","Type":"ContainerStarted","Data":"cd2df9713cf65dda3bee2262acc3ddea403a1fb82fd0fbfc4cb4187c1c4d87fc"} Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.511151 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"e77b0d1917796cde25b55664bf23efd7ed77639f9bdcac08bf26dbbb557870a9"} Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.512747 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33"} Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.512887 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6"} Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.512986 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"477179bf249a19b16e085eee86630532632185d70cd428684e1abfdf97d53f95"} Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.513821 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076"} Jan 29 15:27:59 
crc kubenswrapper[5008]: I0129 15:27:59.513882 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"11b3eb18bc1e054c634937244422a000e1ad2ecccff77ecb72f04109c5cbf34a"} Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.519120 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.520596 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.531838 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.544913 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.545415 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.553707 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-78bl2" Jan 29 15:27:59 crc kubenswrapper[5008]: W0129 15:27:59.555531 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca0fcb2d_733d_4bde_9bbf_3f7082d0e244.slice/crio-5322962a9ed8ffae5b21db73f40150f5b6ddce142937397a45c4a59534a8a608 WatchSource:0}: Error finding container 5322962a9ed8ffae5b21db73f40150f5b6ddce142937397a45c4a59534a8a608: Status 404 returned error can't find the container with id 5322962a9ed8ffae5b21db73f40150f5b6ddce142937397a45c4a59534a8a608 Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.559373 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-42hcz" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.559351 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: W0129 15:27:59.573091 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa065d0b_d690_4a7d_9079_a8f976a7aca3.slice/crio-feeb0c23c07da0fef24a102842931147d1529065121b9e0131ef3ac1a002c490 WatchSource:0}: Error finding container feeb0c23c07da0fef24a102842931147d1529065121b9e0131ef3ac1a002c490: Status 404 returned error can't find the container with id feeb0c23c07da0fef24a102842931147d1529065121b9e0131ef3ac1a002c490 Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.576891 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.595959 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.613608 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.624902 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pqg9w"] Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.626005 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.628569 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.628714 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.628879 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.628960 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.630187 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.630416 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.630680 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.648825 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.649058 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.665465 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.679156 5008 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.696413 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.710907 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.739311 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.744734 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-systemd\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.744792 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-run-ovn-kubernetes\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.744827 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-cni-bin\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.744873 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-ovn\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.744892 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-env-overrides\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.744910 5008 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-openvswitch\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.744928 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.744948 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-ovnkube-script-lib\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.744979 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-node-log\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.744996 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-ovnkube-config\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.745021 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-etc-openvswitch\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.745056 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-systemd-units\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.745074 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-run-netns\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.745091 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2xcc\" (UniqueName: \"kubernetes.io/projected/1d092513-7735-4c98-9734-57bc46b99280-kube-api-access-d2xcc\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.745111 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-kubelet\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.745634 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-log-socket\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.745830 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-slash\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.745889 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-var-lib-openvswitch\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.745991 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-cni-netd\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.746024 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1d092513-7735-4c98-9734-57bc46b99280-ovn-node-metrics-cert\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.779103 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.820290 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846624 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-etc-openvswitch\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846697 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-systemd-units\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: 
I0129 15:27:59.846713 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-run-netns\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846752 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2xcc\" (UniqueName: \"kubernetes.io/projected/1d092513-7735-4c98-9734-57bc46b99280-kube-api-access-d2xcc\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846769 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-kubelet\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846773 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-etc-openvswitch\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846829 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-systemd-units\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846853 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-log-socket\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846809 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-log-socket\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846867 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-kubelet\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846835 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-run-netns\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846908 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-slash\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846928 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-slash\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846938 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-var-lib-openvswitch\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846956 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-cni-netd\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846962 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-var-lib-openvswitch\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846972 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1d092513-7735-4c98-9734-57bc46b99280-ovn-node-metrics-cert\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846991 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-systemd\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.846992 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-cni-netd\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847005 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-run-ovn-kubernetes\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847019 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-cni-bin\") pod 
\"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847038 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-ovn\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847054 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-env-overrides\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847071 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-openvswitch\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847087 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847110 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-ovnkube-script-lib\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847142 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-node-log\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847157 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-ovnkube-config\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847772 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847806 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-ovnkube-config\") pod \"ovnkube-node-pqg9w\" (UID: 
\"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847864 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-openvswitch\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847900 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-run-ovn-kubernetes\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847903 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-node-log\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847929 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-systemd\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847951 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-ovn\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.847975 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-cni-bin\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.848234 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-env-overrides\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.848322 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-ovnkube-script-lib\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.851397 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1d092513-7735-4c98-9734-57bc46b99280-ovn-node-metrics-cert\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.857520 5008 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.886457 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2xcc\" (UniqueName: \"kubernetes.io/projected/1d092513-7735-4c98-9734-57bc46b99280-kube-api-access-d2xcc\") pod \"ovnkube-node-pqg9w\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.918613 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.947599 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.947743 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:27:59 crc kubenswrapper[5008]: E0129 15:27:59.947775 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:28:01.947747347 +0000 UTC m=+25.620601584 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:27:59 crc kubenswrapper[5008]: E0129 15:27:59.947875 5008 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:27:59 crc kubenswrapper[5008]: E0129 15:27:59.947947 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:01.947926692 +0000 UTC m=+25.620780949 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.958065 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:27:59 crc kubenswrapper[5008]: I0129 15:27:59.964684 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:27:59 crc kubenswrapper[5008]: W0129 15:27:59.971498 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d092513_7735_4c98_9734_57bc46b99280.slice/crio-3ed021c49019edf6db353db02ef3c36191fef92186df2ed16a92920dd439b3d2 WatchSource:0}: Error finding container 3ed021c49019edf6db353db02ef3c36191fef92186df2ed16a92920dd439b3d2: Status 404 returned error can't find the container with id 3ed021c49019edf6db353db02ef3c36191fef92186df2ed16a92920dd439b3d2 Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.001588 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:27:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.042019 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.048731 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.048772 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.048813 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:00 crc kubenswrapper[5008]: E0129 15:28:00.048904 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:28:00 crc kubenswrapper[5008]: E0129 15:28:00.048918 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:28:00 crc kubenswrapper[5008]: E0129 15:28:00.048928 5008 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:00 crc kubenswrapper[5008]: E0129 15:28:00.048967 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:02.048951601 +0000 UTC m=+25.721805838 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:00 crc kubenswrapper[5008]: E0129 15:28:00.049230 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:28:00 crc kubenswrapper[5008]: E0129 15:28:00.049248 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:28:00 crc kubenswrapper[5008]: E0129 15:28:00.049255 5008 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:00 crc kubenswrapper[5008]: E0129 15:28:00.049276 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:02.049269829 +0000 UTC m=+25.722124066 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:00 crc kubenswrapper[5008]: E0129 15:28:00.049304 5008 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:28:00 crc kubenswrapper[5008]: E0129 15:28:00.049321 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:02.049316441 +0000 UTC m=+25.722170678 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.088183 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.121531 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.163149 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.200500 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.252906 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\
"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29
T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.283975 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.285960 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 09:41:22.739610229 +0000 UTC Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.323071 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.323111 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.323177 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:00 crc kubenswrapper[5008]: E0129 15:28:00.323231 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:00 crc kubenswrapper[5008]: E0129 15:28:00.323330 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:00 crc kubenswrapper[5008]: E0129 15:28:00.323475 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.323478 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.362645 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.400206 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.441211 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.517950 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-42hcz" event={"ID":"cdd8ae23-3f9f-49f8-928d-46dad823fde4","Type":"ContainerStarted","Data":"a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b"} Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.518011 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-42hcz" event={"ID":"cdd8ae23-3f9f-49f8-928d-46dad823fde4","Type":"ContainerStarted","Data":"9483bcd1b2d3148e3e1c18b543c80ce2fa9143c3acccb478fc92b911e23621f6"} Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.519646 5008 generic.go:334] "Generic (PLEG): container finished" podID="fa065d0b-d690-4a7d-9079-a8f976a7aca3" 
containerID="dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456" exitCode=0 Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.519708 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" event={"ID":"fa065d0b-d690-4a7d-9079-a8f976a7aca3","Type":"ContainerDied","Data":"dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456"} Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.519729 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" event={"ID":"fa065d0b-d690-4a7d-9079-a8f976a7aca3","Type":"ContainerStarted","Data":"feeb0c23c07da0fef24a102842931147d1529065121b9e0131ef3ac1a002c490"} Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.521720 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerStarted","Data":"fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247"} Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.521756 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerStarted","Data":"b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731"} Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.521770 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerStarted","Data":"5322962a9ed8ffae5b21db73f40150f5b6ddce142937397a45c4a59534a8a608"} Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.523184 5008 generic.go:334] "Generic (PLEG): container finished" podID="1d092513-7735-4c98-9734-57bc46b99280" containerID="6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6" exitCode=0 Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.523255 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerDied","Data":"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6"} Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.523292 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerStarted","Data":"3ed021c49019edf6db353db02ef3c36191fef92186df2ed16a92920dd439b3d2"} Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.532140 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.552696 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.565761 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.597391 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.635579 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.684342 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.722508 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.761695 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.800295 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.838410 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.888884 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.918144 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:00 crc kubenswrapper[5008]: I0129 15:28:00.964766 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.000665 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.044830 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.084483 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.120826 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.123578 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-qj8wb"] Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.124002 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-qj8wb" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.150568 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.169636 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.189718 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.209066 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.238911 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.263509 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mvmz\" (UniqueName: \"kubernetes.io/projected/9ffbfcf6-99e5-450c-8c72-b2db9365d93e-kube-api-access-8mvmz\") pod \"node-ca-qj8wb\" (UID: \"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\") " pod="openshift-image-registry/node-ca-qj8wb" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.263554 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9ffbfcf6-99e5-450c-8c72-b2db9365d93e-host\") pod \"node-ca-qj8wb\" (UID: \"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\") " pod="openshift-image-registry/node-ca-qj8wb" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.263586 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9ffbfcf6-99e5-450c-8c72-b2db9365d93e-serviceca\") pod \"node-ca-qj8wb\" (UID: \"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\") " pod="openshift-image-registry/node-ca-qj8wb" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.279964 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.286124 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 14:05:04.607345531 +0000 UTC
Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.319037 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.358975 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.364453 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mvmz\" (UniqueName: \"kubernetes.io/projected/9ffbfcf6-99e5-450c-8c72-b2db9365d93e-kube-api-access-8mvmz\") pod \"node-ca-qj8wb\" (UID: \"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\") " pod="openshift-image-registry/node-ca-qj8wb" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.364518 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9ffbfcf6-99e5-450c-8c72-b2db9365d93e-host\") pod \"node-ca-qj8wb\" (UID: \"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\") " pod="openshift-image-registry/node-ca-qj8wb" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.364578 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9ffbfcf6-99e5-450c-8c72-b2db9365d93e-serviceca\") pod \"node-ca-qj8wb\" (UID: \"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\") " pod="openshift-image-registry/node-ca-qj8wb" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.364751 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9ffbfcf6-99e5-450c-8c72-b2db9365d93e-host\") pod \"node-ca-qj8wb\" (UID: \"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\") " pod="openshift-image-registry/node-ca-qj8wb" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.366079 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9ffbfcf6-99e5-450c-8c72-b2db9365d93e-serviceca\") pod \"node-ca-qj8wb\" (UID: \"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\") " pod="openshift-image-registry/node-ca-qj8wb" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.405346 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mvmz\" (UniqueName: \"kubernetes.io/projected/9ffbfcf6-99e5-450c-8c72-b2db9365d93e-kube-api-access-8mvmz\") pod \"node-ca-qj8wb\" (UID: \"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\") " pod="openshift-image-registry/node-ca-qj8wb" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.417847 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.455220 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-qj8wb" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.456882 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: W0129 15:28:01.467381 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ffbfcf6_99e5_450c_8c72_b2db9365d93e.slice/crio-a6b80a39848f736368b0549175dee0c41b1ba8ed0a33449123018e7ca70c4f44 WatchSource:0}: Error finding container a6b80a39848f736368b0549175dee0c41b1ba8ed0a33449123018e7ca70c4f44: Status 404 returned error can't find the container with id a6b80a39848f736368b0549175dee0c41b1ba8ed0a33449123018e7ca70c4f44 Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.499507 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.537935 5008 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91"} Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.540531 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qj8wb" event={"ID":"9ffbfcf6-99e5-450c-8c72-b2db9365d93e","Type":"ContainerStarted","Data":"a6b80a39848f736368b0549175dee0c41b1ba8ed0a33449123018e7ca70c4f44"} Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.544315 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z
is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.545291 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" event={"ID":"fa065d0b-d690-4a7d-9079-a8f976a7aca3","Type":"ContainerStarted","Data":"be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e"} Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.549164 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerStarted","Data":"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420"} Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.549208 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerStarted","Data":"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5"} Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.549220 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerStarted","Data":"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1"} Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.549232 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerStarted","Data":"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554"} Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.549244 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerStarted","Data":"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8"} Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.583475 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.652962 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.676069 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.703470 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.738315 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.784602 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.817086 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.860223 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.899302 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.939465 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.979401 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.979489 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:01 crc kubenswrapper[5008]: E0129 15:28:01.979559 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:28:05.979526084 +0000 UTC m=+29.652380321 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:28:01 crc kubenswrapper[5008]: E0129 15:28:01.979609 5008 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:28:01 crc kubenswrapper[5008]: E0129 15:28:01.979664 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:05.979643528 +0000 UTC m=+29.652497765 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:28:01 crc kubenswrapper[5008]: I0129 15:28:01.984856 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:01Z 
is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.020275 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.060142 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.080311 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.080370 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.080405 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:02 crc kubenswrapper[5008]: E0129 15:28:02.080461 5008 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:28:02 crc kubenswrapper[5008]: E0129 15:28:02.080464 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:28:02 crc kubenswrapper[5008]: E0129 15:28:02.080496 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" 
not registered Jan 29 15:28:02 crc kubenswrapper[5008]: E0129 15:28:02.080508 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:06.080494021 +0000 UTC m=+29.753348258 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:28:02 crc kubenswrapper[5008]: E0129 15:28:02.080511 5008 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:02 crc kubenswrapper[5008]: E0129 15:28:02.080554 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:06.080542562 +0000 UTC m=+29.753396799 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:02 crc kubenswrapper[5008]: E0129 15:28:02.080609 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:28:02 crc kubenswrapper[5008]: E0129 15:28:02.080646 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:28:02 crc kubenswrapper[5008]: E0129 15:28:02.080687 5008 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:02 crc kubenswrapper[5008]: E0129 15:28:02.080753 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:06.080732377 +0000 UTC m=+29.753586674 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.097513 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.140180 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.180553 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.222409 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.258986 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.287172 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 15:59:45.234006706 +0000 UTC Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.323857 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.323876 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.324079 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:02 crc kubenswrapper[5008]: E0129 15:28:02.324238 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:02 crc kubenswrapper[5008]: E0129 15:28:02.324336 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:02 crc kubenswrapper[5008]: E0129 15:28:02.324066 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.556518 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerStarted","Data":"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1"} Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.558178 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qj8wb" event={"ID":"9ffbfcf6-99e5-450c-8c72-b2db9365d93e","Type":"ContainerStarted","Data":"eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af"} Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.561196 5008 generic.go:334] "Generic (PLEG): container finished" podID="fa065d0b-d690-4a7d-9079-a8f976a7aca3" containerID="be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e" exitCode=0 Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.561281 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" event={"ID":"fa065d0b-d690-4a7d-9079-a8f976a7aca3","Type":"ContainerDied","Data":"be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e"} Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.576399 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.592827 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.609562 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.619744 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.633602 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.649194 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.661038 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.672679 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.685996 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.708362 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.720061 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.739663 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.780129 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.819845 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.863973 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrid
es\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.898292 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.937980 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:02 crc kubenswrapper[5008]: I0129 15:28:02.982155 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.019483 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.057957 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.106611 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.139268 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.179466 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.219999 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.260378 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.288080 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 00:08:03.429161567 +0000 UTC Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.302469 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z 
is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.337615 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.382591 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.417037 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.466392 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.567155 5008 generic.go:334] "Generic (PLEG): container finished" podID="fa065d0b-d690-4a7d-9079-a8f976a7aca3" containerID="c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36" exitCode=0 Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.567237 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" event={"ID":"fa065d0b-d690-4a7d-9079-a8f976a7aca3","Type":"ContainerDied","Data":"c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36"} Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.579162 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.599468 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.616576 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.629715 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.662428 5008 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.700347 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.740425 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.783540 5008 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2
538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.820228 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.862752 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.903967 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.942085 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:03 crc kubenswrapper[5008]: I0129 15:28:03.981669 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.019181 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.073059 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.105812 5008 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.107535 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.107579 5008 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.107593 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.107744 5008 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.115430 5008 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.115912 5008 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.117400 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.117544 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.117620 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.117684 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.117751 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:04Z","lastTransitionTime":"2026-01-29T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:04 crc kubenswrapper[5008]: E0129 15:28:04.153941 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154a
fa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.159686 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.159738 5008 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.159754 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.159774 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.159810 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:04Z","lastTransitionTime":"2026-01-29T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:04 crc kubenswrapper[5008]: E0129 15:28:04.174169 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.177646 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.177690 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.177700 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.177718 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.177731 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:04Z","lastTransitionTime":"2026-01-29T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:04 crc kubenswrapper[5008]: E0129 15:28:04.190339 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.193807 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.193847 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.193860 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.193877 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.193890 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:04Z","lastTransitionTime":"2026-01-29T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:04 crc kubenswrapper[5008]: E0129 15:28:04.205237 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.208238 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.208278 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.208290 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.208304 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.208313 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:04Z","lastTransitionTime":"2026-01-29T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:04 crc kubenswrapper[5008]: E0129 15:28:04.218920 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:04 crc kubenswrapper[5008]: E0129 15:28:04.219036 5008 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.220408 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.220435 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.220444 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.220459 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.220469 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:04Z","lastTransitionTime":"2026-01-29T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.289163 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 08:38:40.177708255 +0000 UTC Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.322985 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.323020 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:04 crc kubenswrapper[5008]: E0129 15:28:04.323150 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:04 crc kubenswrapper[5008]: E0129 15:28:04.323297 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.323457 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:04 crc kubenswrapper[5008]: E0129 15:28:04.323671 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.323813 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.324009 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.324152 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.324303 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.324451 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:04Z","lastTransitionTime":"2026-01-29T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.426643 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.427009 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.427021 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.427036 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.427048 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:04Z","lastTransitionTime":"2026-01-29T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.529707 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.529769 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.529855 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.529885 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.529904 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:04Z","lastTransitionTime":"2026-01-29T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.574503 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerStarted","Data":"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195"} Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.578612 5008 generic.go:334] "Generic (PLEG): container finished" podID="fa065d0b-d690-4a7d-9079-a8f976a7aca3" containerID="bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b" exitCode=0 Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.578671 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" event={"ID":"fa065d0b-d690-4a7d-9079-a8f976a7aca3","Type":"ContainerDied","Data":"bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b"} Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.599297 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.613416 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.628824 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.632474 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.632501 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.632510 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.632523 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.632532 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:04Z","lastTransitionTime":"2026-01-29T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.638255 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.649599 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.662339 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.681708 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.693975 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.708125 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.722723 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.733395 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.735040 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.735082 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.735093 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.735109 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.735120 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:04Z","lastTransitionTime":"2026-01-29T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.751361 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9df
eea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.767021 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.785461 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.797180 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.837457 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.837494 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.837506 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.837522 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.837533 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:04Z","lastTransitionTime":"2026-01-29T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.891014 5008 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.891871 5008 scope.go:117] "RemoveContainer" containerID="4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761"
Jan 29 15:28:04 crc kubenswrapper[5008]: E0129 15:28:04.892071 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.940430 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.940474 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.940482 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.940496 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:04 crc kubenswrapper[5008]: I0129 15:28:04.940504 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:04Z","lastTransitionTime":"2026-01-29T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.042266 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.042301 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.042308 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.042322 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.042333 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:05Z","lastTransitionTime":"2026-01-29T15:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.145641 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.145698 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.145712 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.145844 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.145855 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:05Z","lastTransitionTime":"2026-01-29T15:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.248576 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.248639 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.248653 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.248676 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.248692 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:05Z","lastTransitionTime":"2026-01-29T15:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
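The records above show the kubelet cycling through NodeNotReady on a missing CNI config while every pod status patch fails against the network-node-identity webhook at https://127.0.0.1:9743 with an expired serving certificate. A minimal sketch for confirming that certificate's validity window from the node itself (an editor-added snippet, not part of the log; assumes Python 3 and a stock openssl binary are available on the host):

    import ssl
    import subprocess

    # Address taken verbatim from the failing webhook Post calls logged above.
    # get_server_certificate() does no chain verification, so it still fetches
    # a certificate that has already expired.
    pem = ssl.get_server_certificate(("127.0.0.1", 9743))
    out = subprocess.run(
        ["openssl", "x509", "-noout", "-startdate", "-enddate"],
        input=pem.encode(),
        capture_output=True,
    )
    # If the log is right, this prints notAfter=Aug 24 17:21:41 2025 GMT.
    print(out.stdout.decode())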
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.290309 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 14:32:22.959970613 +0000 UTC
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.352150 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.352201 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.352212 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.352232 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.352244 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:05Z","lastTransitionTime":"2026-01-29T15:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.456265 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.456318 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.456336 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.456370 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.456390 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:05Z","lastTransitionTime":"2026-01-29T15:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
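The certificate_manager record above also puts the kubelet-serving rotation deadline (2025-11-21) well before the clock the kubelet is logging (2026-01-29), alongside the webhook certificate that expired 2025-08-24. A quick arithmetic check using only timestamps quoted in these records (editor-added snippet, not part of the log; fractional seconds dropped):

    from datetime import datetime, timezone

    # All timestamps copied from the log records above.
    now = datetime(2026, 1, 29, 15, 28, 5, tzinfo=timezone.utc)                  # kubelet clock
    webhook_not_after = datetime(2025, 8, 24, 17, 21, 41, tzinfo=timezone.utc)   # webhook cert notAfter
    rotation_deadline = datetime(2025, 11, 21, 14, 32, 22, tzinfo=timezone.utc)  # kubelet-serving rotation deadline

    print(now - webhook_not_after)   # 157 days, 22:06:24 past expiry: every status patch fails TLS
    print(now - rotation_deadline)   # 69 days, 0:55:43 past the rotation deadline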
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.559766 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.559927 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.559955 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.559989 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.560016 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:05Z","lastTransitionTime":"2026-01-29T15:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.589626 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" event={"ID":"fa065d0b-d690-4a7d-9079-a8f976a7aca3","Type":"ContainerStarted","Data":"83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3"}
Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.609266 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.628528 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.645628 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.663126 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.663216 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.663241 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.663270 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 
15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.663288 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:05Z","lastTransitionTime":"2026-01-29T15:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.670935 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.689626 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.706119 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.730546 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.747896 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.766297 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.766428 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.766446 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.766466 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.766480 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:05Z","lastTransitionTime":"2026-01-29T15:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.777403 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.792434 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.805410 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.816205 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.829723 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.847768 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrid
es\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.865595 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.869647 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.869699 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.869711 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.869735 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.869753 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:05Z","lastTransitionTime":"2026-01-29T15:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.972845 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.972890 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.972906 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.972930 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:05 crc kubenswrapper[5008]: I0129 15:28:05.972946 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:05Z","lastTransitionTime":"2026-01-29T15:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.019267 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.019453 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:06 crc kubenswrapper[5008]: E0129 15:28:06.019577 5008 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:28:06 crc kubenswrapper[5008]: E0129 15:28:06.019673 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:28:14.019585873 +0000 UTC m=+37.692440110 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:28:06 crc kubenswrapper[5008]: E0129 15:28:06.019732 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:14.019722547 +0000 UTC m=+37.692576784 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.076517 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.076567 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.076579 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.076599 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.076611 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:06Z","lastTransitionTime":"2026-01-29T15:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.120986 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.121046 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.121073 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:06 crc kubenswrapper[5008]: E0129 15:28:06.121199 5008 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:28:06 crc kubenswrapper[5008]: E0129 15:28:06.121235 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:28:06 crc kubenswrapper[5008]: E0129 15:28:06.121251 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 
15:28:06 crc kubenswrapper[5008]: E0129 15:28:06.121260 5008 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:06 crc kubenswrapper[5008]: E0129 15:28:06.121303 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:14.12128885 +0000 UTC m=+37.794143087 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:06 crc kubenswrapper[5008]: E0129 15:28:06.121348 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:14.12131013 +0000 UTC m=+37.794164397 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:28:06 crc kubenswrapper[5008]: E0129 15:28:06.121524 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:28:06 crc kubenswrapper[5008]: E0129 15:28:06.121579 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:28:06 crc kubenswrapper[5008]: E0129 15:28:06.121606 5008 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:06 crc kubenswrapper[5008]: E0129 15:28:06.121689 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:14.12166748 +0000 UTC m=+37.794521797 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.179519 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.179574 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.179584 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.179599 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.179608 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:06Z","lastTransitionTime":"2026-01-29T15:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.282470 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.282536 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.282549 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.282568 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.282580 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:06Z","lastTransitionTime":"2026-01-29T15:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.290947 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 22:15:53.281470826 +0000 UTC Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.323347 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.323382 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:06 crc kubenswrapper[5008]: E0129 15:28:06.323489 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:06 crc kubenswrapper[5008]: E0129 15:28:06.323676 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.323826 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:06 crc kubenswrapper[5008]: E0129 15:28:06.323939 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.385426 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.385480 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.385494 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.385509 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.385824 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:06Z","lastTransitionTime":"2026-01-29T15:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.490605 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.490662 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.490671 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.490688 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.490697 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:06Z","lastTransitionTime":"2026-01-29T15:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.593380 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.593432 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.593445 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.593465 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.593898 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:06Z","lastTransitionTime":"2026-01-29T15:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.597920 5008 generic.go:334] "Generic (PLEG): container finished" podID="fa065d0b-d690-4a7d-9079-a8f976a7aca3" containerID="83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3" exitCode=0 Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.597973 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" event={"ID":"fa065d0b-d690-4a7d-9079-a8f976a7aca3","Type":"ContainerDied","Data":"83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3"} Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.619756 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.635134 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.663990 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bc
a2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.681450 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.695219 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.697375 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.697397 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.697406 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.697419 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.697428 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:06Z","lastTransitionTime":"2026-01-29T15:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.713244 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.729887 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.746423 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.760172 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.777817 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.795112 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.805365 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.805406 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.805416 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.805433 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.805447 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:06Z","lastTransitionTime":"2026-01-29T15:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.808683 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.829718 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:06Z 
is after 2025-08-24T17:21:41Z" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.844301 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.858582 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.908867 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.908917 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.908927 5008 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.908947 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:06 crc kubenswrapper[5008]: I0129 15:28:06.908960 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:06Z","lastTransitionTime":"2026-01-29T15:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.012730 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.012767 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.012800 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.012819 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.012831 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:07Z","lastTransitionTime":"2026-01-29T15:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.020737 5008 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.115288 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.115324 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.115333 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.115349 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.115361 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:07Z","lastTransitionTime":"2026-01-29T15:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.217609 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.217643 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.217653 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.217669 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.217681 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:07Z","lastTransitionTime":"2026-01-29T15:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.291319 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 10:18:08.711078548 +0000 UTC Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.320502 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.320555 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.320568 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.320590 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.320605 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:07Z","lastTransitionTime":"2026-01-29T15:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.343390 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.358072 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.375326 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.386001 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.398237 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.414384 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.422700 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.422746 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.422755 5008 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.422773 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.422801 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:07Z","lastTransitionTime":"2026-01-29T15:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.425129 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.442511 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.452234 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.464951 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.482624 5008 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.493417 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.507535 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.526315 5008 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.526361 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.526375 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.526394 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.526408 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:07Z","lastTransitionTime":"2026-01-29T15:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.530548 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.546463 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.603807 5008 generic.go:334] "Generic (PLEG): container finished" podID="fa065d0b-d690-4a7d-9079-a8f976a7aca3" containerID="78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e" exitCode=0 Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.604099 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" event={"ID":"fa065d0b-d690-4a7d-9079-a8f976a7aca3","Type":"ContainerDied","Data":"78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e"} Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.613212 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerStarted","Data":"6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee"} Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.613668 5008 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.613704 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.619383 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPa
th\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071
bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.629231 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.629271 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.629282 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.629304 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.629327 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:07Z","lastTransitionTime":"2026-01-29T15:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.630065 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"
podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.644126 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.746092 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.746128 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.746136 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.746153 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.746163 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:07Z","lastTransitionTime":"2026-01-29T15:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.748734 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.749246 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.754426 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.770906 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.795851 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.810283 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.826279 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.835666 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.847527 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.854866 5008 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.854895 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.854904 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.854916 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.854925 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:07Z","lastTransitionTime":"2026-01-29T15:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.865666 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z 
is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.876801 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.891817 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.906254 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.919753 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.935042 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.949460 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.958355 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.958402 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.958414 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.958463 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.958487 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:07Z","lastTransitionTime":"2026-01-29T15:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.970080 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc
12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.981149 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:07 crc kubenswrapper[5008]: I0129 15:28:07.993442 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.004451 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.014234 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.031334 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6111e93f68c8aa5c23e0317317a19c4a1df88a0d
6babfab0d89d65902410feee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.042047 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.055325 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.060526 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.060551 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.060559 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.060571 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.060581 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:08Z","lastTransitionTime":"2026-01-29T15:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.068885 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.084562 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.098903 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.116977 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.129993 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.163134 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.163179 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.163191 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.163210 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.163223 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:08Z","lastTransitionTime":"2026-01-29T15:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.266205 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.266257 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.266267 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.266336 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.266349 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:08Z","lastTransitionTime":"2026-01-29T15:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.291633 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 20:01:14.073301513 +0000 UTC Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.323032 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.323063 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.323124 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:08 crc kubenswrapper[5008]: E0129 15:28:08.323205 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:08 crc kubenswrapper[5008]: E0129 15:28:08.323336 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:08 crc kubenswrapper[5008]: E0129 15:28:08.323405 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.369082 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.369125 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.369144 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.369163 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.369174 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:08Z","lastTransitionTime":"2026-01-29T15:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.471885 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.471922 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.471930 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.471944 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.471955 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:08Z","lastTransitionTime":"2026-01-29T15:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.574537 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.574580 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.574591 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.574609 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.574619 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:08Z","lastTransitionTime":"2026-01-29T15:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.619409 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" event={"ID":"fa065d0b-d690-4a7d-9079-a8f976a7aca3","Type":"ContainerStarted","Data":"bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31"} Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.619491 5008 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.640654 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.656004 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.667411 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.677960 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.678026 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.678045 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.678070 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.678129 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:08Z","lastTransitionTime":"2026-01-29T15:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.698625 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6111e93f68c8aa5c23e0317317a19c4a1df88a0d
6babfab0d89d65902410feee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.713925 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.731288 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.748442 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.764227 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.780601 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.780649 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.780662 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.780677 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.780688 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:08Z","lastTransitionTime":"2026-01-29T15:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.785248 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"starte
dAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 
2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.801730 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.813497 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.825227 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.841044 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.862622 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.877835 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.883906 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.883941 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.883951 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.883965 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.883975 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:08Z","lastTransitionTime":"2026-01-29T15:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.986199 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.986256 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.986270 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.986288 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:08 crc kubenswrapper[5008]: I0129 15:28:08.986302 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:08Z","lastTransitionTime":"2026-01-29T15:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.091256 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.091320 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.091336 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.091355 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.091370 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:09Z","lastTransitionTime":"2026-01-29T15:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.193696 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.193772 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.193825 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.193898 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.193937 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:09Z","lastTransitionTime":"2026-01-29T15:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.292313 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 19:20:46.140382029 +0000 UTC Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.296211 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.296257 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.296266 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.296281 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.296290 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:09Z","lastTransitionTime":"2026-01-29T15:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.398571 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.398642 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.398665 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.398695 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.398717 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:09Z","lastTransitionTime":"2026-01-29T15:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.501657 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.501738 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.501751 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.501769 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.501799 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:09Z","lastTransitionTime":"2026-01-29T15:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.604629 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.604705 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.604729 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.604763 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.604822 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:09Z","lastTransitionTime":"2026-01-29T15:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.622331 5008 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.707192 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.707234 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.707245 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.707262 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.707273 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:09Z","lastTransitionTime":"2026-01-29T15:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.809706 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.809763 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.809777 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.809830 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.809855 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:09Z","lastTransitionTime":"2026-01-29T15:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.912192 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.912259 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.912271 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.912290 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:09 crc kubenswrapper[5008]: I0129 15:28:09.912302 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:09Z","lastTransitionTime":"2026-01-29T15:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.015412 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.015473 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.015486 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.015508 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.015522 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:10Z","lastTransitionTime":"2026-01-29T15:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.124194 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.124272 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.124285 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.124305 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.124323 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:10Z","lastTransitionTime":"2026-01-29T15:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.227605 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.227647 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.227659 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.227673 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.227685 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:10Z","lastTransitionTime":"2026-01-29T15:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.293054 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 09:18:23.896959013 +0000 UTC Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.323457 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.323475 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.323538 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:10 crc kubenswrapper[5008]: E0129 15:28:10.323648 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:10 crc kubenswrapper[5008]: E0129 15:28:10.323717 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:10 crc kubenswrapper[5008]: E0129 15:28:10.323812 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.330570 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.330608 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.330619 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.330636 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.330647 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:10Z","lastTransitionTime":"2026-01-29T15:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.433676 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.433722 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.433733 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.433752 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.433763 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:10Z","lastTransitionTime":"2026-01-29T15:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.537191 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.537280 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.537300 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.537324 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.537341 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:10Z","lastTransitionTime":"2026-01-29T15:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.643825 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.643873 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.643885 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.643901 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.643914 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:10Z","lastTransitionTime":"2026-01-29T15:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.747266 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.747345 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.747373 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.747406 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.747431 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:10Z","lastTransitionTime":"2026-01-29T15:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.851252 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.851312 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.851320 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.851344 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.851360 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:10Z","lastTransitionTime":"2026-01-29T15:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.954220 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.954266 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.954278 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.954293 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:10 crc kubenswrapper[5008]: I0129 15:28:10.954306 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:10Z","lastTransitionTime":"2026-01-29T15:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.057468 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.057519 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.057527 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.057542 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.057551 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:11Z","lastTransitionTime":"2026-01-29T15:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.160616 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.160700 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.160720 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.160743 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.160763 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:11Z","lastTransitionTime":"2026-01-29T15:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.263704 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.264143 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.264167 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.264195 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.264216 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:11Z","lastTransitionTime":"2026-01-29T15:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.293527 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 15:23:51.504832049 +0000 UTC Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.368174 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.368251 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.368275 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.368323 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.368353 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:11Z","lastTransitionTime":"2026-01-29T15:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.471687 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.471747 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.471759 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.471805 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.471820 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:11Z","lastTransitionTime":"2026-01-29T15:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.581471 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.581544 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.581563 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.581596 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.581613 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:11Z","lastTransitionTime":"2026-01-29T15:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.616757 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp"] Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.617281 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.620452 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.620473 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.631503 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovnkube-controller/0.log" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.634963 5008 generic.go:334] "Generic (PLEG): container finished" podID="1d092513-7735-4c98-9734-57bc46b99280" containerID="6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee" exitCode=1 Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.635034 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerDied","Data":"6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee"} Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.636232 5008 scope.go:117] "RemoveContainer" containerID="6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.640066 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.659217 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.672980 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.680287 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4f5a0b69-5edd-467c-a822-093f1689df1d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-p5kdp\" (UID: \"4f5a0b69-5edd-467c-a822-093f1689df1d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.680357 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4f5a0b69-5edd-467c-a822-093f1689df1d-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-p5kdp\" (UID: \"4f5a0b69-5edd-467c-a822-093f1689df1d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.680407 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4f5a0b69-5edd-467c-a822-093f1689df1d-env-overrides\") pod \"ovnkube-control-plane-749d76644c-p5kdp\" (UID: \"4f5a0b69-5edd-467c-a822-093f1689df1d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.680433 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq2fz\" (UniqueName: \"kubernetes.io/projected/4f5a0b69-5edd-467c-a822-093f1689df1d-kube-api-access-gq2fz\") pod \"ovnkube-control-plane-749d76644c-p5kdp\" (UID: \"4f5a0b69-5edd-467c-a822-093f1689df1d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.683945 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.683994 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.684007 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.684028 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.684043 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:11Z","lastTransitionTime":"2026-01-29T15:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.693219 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.715000 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath
\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.728617 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.739411 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.755531 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.769747 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.780641 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.780894 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4f5a0b69-5edd-467c-a822-093f1689df1d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-p5kdp\" (UID: \"4f5a0b69-5edd-467c-a822-093f1689df1d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.780932 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4f5a0b69-5edd-467c-a822-093f1689df1d-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-p5kdp\" (UID: \"4f5a0b69-5edd-467c-a822-093f1689df1d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.780957 5008 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4f5a0b69-5edd-467c-a822-093f1689df1d-env-overrides\") pod \"ovnkube-control-plane-749d76644c-p5kdp\" (UID: \"4f5a0b69-5edd-467c-a822-093f1689df1d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.780977 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq2fz\" (UniqueName: \"kubernetes.io/projected/4f5a0b69-5edd-467c-a822-093f1689df1d-kube-api-access-gq2fz\") pod \"ovnkube-control-plane-749d76644c-p5kdp\" (UID: \"4f5a0b69-5edd-467c-a822-093f1689df1d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.782069 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4f5a0b69-5edd-467c-a822-093f1689df1d-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-p5kdp\" (UID: \"4f5a0b69-5edd-467c-a822-093f1689df1d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.782323 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4f5a0b69-5edd-467c-a822-093f1689df1d-env-overrides\") pod \"ovnkube-control-plane-749d76644c-p5kdp\" (UID: \"4f5a0b69-5edd-467c-a822-093f1689df1d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.793379 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4f5a0b69-5edd-467c-a822-093f1689df1d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-p5kdp\" (UID: \"4f5a0b69-5edd-467c-a822-093f1689df1d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.793867 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.793909 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.793922 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.793941 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.793954 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:11Z","lastTransitionTime":"2026-01-29T15:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.797092 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq2fz\" (UniqueName: \"kubernetes.io/projected/4f5a0b69-5edd-467c-a822-093f1689df1d-kube-api-access-gq2fz\") pod \"ovnkube-control-plane-749d76644c-p5kdp\" (UID: \"4f5a0b69-5edd-467c-a822-093f1689df1d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.799213 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee18
47b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.812309 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.825967 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.838809 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.850829 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.864875 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.885614 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.896073 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.896330 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.896421 5008 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.896516 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.896640 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:11Z","lastTransitionTime":"2026-01-29T15:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.901518 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.919689 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.932054 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.936279 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.955442 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.968732 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.984445 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.996560 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.999480 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.999512 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.999521 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.999537 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:11 crc kubenswrapper[5008]: I0129 15:28:11.999546 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:11Z","lastTransitionTime":"2026-01-29T15:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.009084 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:12Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.021342 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:12Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.031859 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:12Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.058037 5008 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"emoval\\\\nI0129 15:28:10.306875 6307 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 15:28:10.306882 6307 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 15:28:10.306927 6307 factory.go:656] Stopping watch factory\\\\nI0129 15:28:10.306933 6307 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.306953 6307 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 15:28:10.306717 6307 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307153 6307 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:28:10.307165 6307 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 15:28:10.307174 6307 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 15:28:10.307184 6307 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15:28:10.307193 6307 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 15:28:10.307206 6307 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 15:28:10.307328 6307 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307559 6307 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:12Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.071907 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:12Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.087258 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:12Z is after 
2025-08-24T17:21:41Z" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.101312 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\
\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:12Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.102977 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.103008 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.103019 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.103039 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.103049 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:12Z","lastTransitionTime":"2026-01-29T15:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.115708 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:12Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.207112 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.207169 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.207180 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.207204 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.207219 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:12Z","lastTransitionTime":"2026-01-29T15:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.294270 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 19:18:06.009075507 +0000 UTC Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.309731 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.309773 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.309811 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.309834 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.309847 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:12Z","lastTransitionTime":"2026-01-29T15:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.323354 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.323394 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.323481 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:28:12 crc kubenswrapper[5008]: E0129 15:28:12.323606 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:28:12 crc kubenswrapper[5008]: E0129 15:28:12.324052 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:28:12 crc kubenswrapper[5008]: E0129 15:28:12.324138 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.413934 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.413988 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.413999 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.414019 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
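Every "Failed to update status for pod" record in this capture carries the same root cause: the pod.network-node-identity.openshift.io validating webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-29, so the API server rejects every kubelet status patch. A minimal stdlib sketch for confirming that from a saved copy of this journal; `kubelet.log` is a placeholder path, and the sketch assumes one journal entry per line:

```python
import re

# Matches the recurring failure signature in the journal text:
#   failed calling webhook \"<name>\" ... current time <now> is after <notAfter>
PATTERN = re.compile(
    r'failed calling webhook \\"(?P<webhook>[^"\\]+)\\".*?'
    r'current time (?P<now>[0-9TZ:.-]+) is after (?P<not_after>[0-9TZ:.-]+)'
)

with open("kubelet.log", encoding="utf-8", errors="replace") as f:
    hits = {m.group("webhook", "not_after") for m in map(PATTERN.search, f) if m}

for webhook, not_after in sorted(hits):
    print(f"{webhook}: serving certificate expired {not_after}")
```

Deduplicating into a set matters here: the kubelet retries the patch for every pod on each sync, so the same expired certificate shows up dozens of times in a few seconds of log.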
Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.414031 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:12Z","lastTransitionTime":"2026-01-29T15:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.516742 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.516808 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.516821 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.516839 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.516851 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:12Z","lastTransitionTime":"2026-01-29T15:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.619581 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.619654 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.619674 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.619701 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.619722 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:12Z","lastTransitionTime":"2026-01-29T15:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.640849 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" event={"ID":"4f5a0b69-5edd-467c-a822-093f1689df1d","Type":"ContainerStarted","Data":"6d96a41832f35a1dded0a118e669f305e267d9975e28965d27e7226d5b16e279"}
Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.724316 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.724383 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.724402 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.724428 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.724447 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:12Z","lastTransitionTime":"2026-01-29T15:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.828092 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.828158 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.828167 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.828184 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
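Between the webhook failures, the kubelet republishes the Ready=False node condition several times per second because /etc/kubernetes/cni/net.d/ still contains no CNI configuration. The condition={...} payload on each "Node became not ready" record is plain JSON, so the repetition collapses naturally into counts; a rough sketch under the same assumptions as above (one entry per line, placeholder `kubelet.log`):

```python
import json
import re
from collections import Counter

# Extracts the node name and the JSON condition payload from setters.go records.
COND = re.compile(r'"Node became not ready" node="(?P<node>[^"]+)" condition=(?P<cond>\{.*\})')

reasons = Counter()
with open("kubelet.log", encoding="utf-8", errors="replace") as f:
    for line in f:
        m = COND.search(line)
        if m:
            cond = json.loads(m.group("cond"))  # e.g. reason=KubeletNotReady
            reasons[(m.group("node"), cond["reason"])] += 1

for (node, reason), count in reasons.most_common():
    print(f"{node}: {reason} repeated {count} times")
```

Counting the repeats rather than reading them makes it easier to see that the condition never clears anywhere in this window.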
Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.828194 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:12Z","lastTransitionTime":"2026-01-29T15:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.930630 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.930663 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.930672 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.930689 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:12 crc kubenswrapper[5008]: I0129 15:28:12.930699 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:12Z","lastTransitionTime":"2026-01-29T15:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.033322 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.033370 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.033383 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.033400 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.033413 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:13Z","lastTransitionTime":"2026-01-29T15:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.125338 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-kkc6c"]
Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.126174 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c"
Jan 29 15:28:13 crc kubenswrapper[5008]: E0129 15:28:13.126291 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.136039 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.136089 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.136104 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.136124 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.136139 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:13Z","lastTransitionTime":"2026-01-29T15:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.141959 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.158382 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.174930 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1
688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\"
:\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.187107 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.198384 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs\") pod \"network-metrics-daemon-kkc6c\" (UID: \"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\") " pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.198437 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl4fv\" (UniqueName: \"kubernetes.io/projected/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-kube-api-access-tl4fv\") pod \"network-metrics-daemon-kkc6c\" (UID: \"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\") " pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.199429 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.216671 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.234932 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8e
e7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"et
cd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.238916 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.238947 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.238955 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.238971 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.238980 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:13Z","lastTransitionTime":"2026-01-29T15:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.255533 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.273486 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.285262 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.295229 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 20:31:07.932536838 +0000 UTC Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.298994 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.299522 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs\") pod \"network-metrics-daemon-kkc6c\" (UID: \"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\") " pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.299559 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl4fv\" (UniqueName: \"kubernetes.io/projected/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-kube-api-access-tl4fv\") pod \"network-metrics-daemon-kkc6c\" (UID: \"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\") " pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:13 crc kubenswrapper[5008]: E0129 15:28:13.299804 5008 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:28:13 crc kubenswrapper[5008]: E0129 15:28:13.299964 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs podName:f3716fd8-7f9b-44e2-ac3c-e907d8793dc9 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:13.799932995 +0000 UTC m=+37.472787242 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs") pod "network-metrics-daemon-kkc6c" (UID: "f3716fd8-7f9b-44e2-ac3c-e907d8793dc9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.317843 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"contai
nerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"emoval\\\\nI0129 15:28:10.306875 6307 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 15:28:10.306882 6307 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 15:28:10.306927 6307 factory.go:656] Stopping watch factory\\\\nI0129 15:28:10.306933 6307 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.306953 6307 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 15:28:10.306717 6307 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307153 6307 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:28:10.307165 6307 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 15:28:10.307174 6307 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 15:28:10.307184 6307 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15:28:10.307193 6307 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 15:28:10.307206 6307 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 15:28:10.307328 6307 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307559 6307 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.318138 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl4fv\" (UniqueName: \"kubernetes.io/projected/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-kube-api-access-tl4fv\") pod \"network-metrics-daemon-kkc6c\" (UID: \"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\") " pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.339738 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.341008 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.341046 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.341056 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.341068 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.341077 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:13Z","lastTransitionTime":"2026-01-29T15:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.353030 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.364284 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.373986 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.386707 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.443390 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.443443 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.443458 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.443482 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.443499 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:13Z","lastTransitionTime":"2026-01-29T15:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.545968 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.546043 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.546066 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.546095 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.546116 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:13Z","lastTransitionTime":"2026-01-29T15:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.649532 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.649570 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.649580 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.649596 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.649608 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:13Z","lastTransitionTime":"2026-01-29T15:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.650340 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovnkube-controller/0.log" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.655681 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerStarted","Data":"7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8"} Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.655871 5008 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.659271 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" event={"ID":"4f5a0b69-5edd-467c-a822-093f1689df1d","Type":"ContainerStarted","Data":"6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d"} Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.682365 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.696931 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.717651 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.737069 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.747771 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.754270 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.754319 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.754336 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.754363 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.754387 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:13Z","lastTransitionTime":"2026-01-29T15:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.767072 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.782015 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.798069 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.805149 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs\") pod \"network-metrics-daemon-kkc6c\" (UID: \"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\") " pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:13 crc kubenswrapper[5008]: E0129 15:28:13.805432 5008 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:28:13 crc kubenswrapper[5008]: E0129 15:28:13.805530 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs podName:f3716fd8-7f9b-44e2-ac3c-e907d8793dc9 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:14.805498765 +0000 UTC m=+38.478353002 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs") pod "network-metrics-daemon-kkc6c" (UID: "f3716fd8-7f9b-44e2-ac3c-e907d8793dc9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.827231 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"nam
e\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3807
11ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.842400 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.856134 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.856661 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.856713 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.856723 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.856742 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.856755 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:13Z","lastTransitionTime":"2026-01-29T15:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.869071 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.889279 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.917015 5008 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"emoval\\\\nI0129 15:28:10.306875 6307 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 15:28:10.306882 6307 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 15:28:10.306927 6307 factory.go:656] Stopping watch factory\\\\nI0129 15:28:10.306933 6307 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.306953 6307 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 15:28:10.306717 6307 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307153 6307 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:28:10.307165 6307 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 15:28:10.307174 6307 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 15:28:10.307184 6307 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15:28:10.307193 6307 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 15:28:10.307206 6307 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 15:28:10.307328 6307 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307559 6307 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.938521 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.956127 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.958824 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.958866 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.958875 5008 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.958897 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.958907 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:13Z","lastTransitionTime":"2026-01-29T15:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:13 crc kubenswrapper[5008]: I0129 15:28:13.973347 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.061134 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.061172 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.061182 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.061194 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.061203 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:14Z","lastTransitionTime":"2026-01-29T15:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.114204 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.114348 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.114389 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:28:30.114362349 +0000 UTC m=+53.787216586 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.114456 5008 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.114513 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:30.114497362 +0000 UTC m=+53.787351659 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.163027 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.163070 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.163080 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.163095 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.163107 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:14Z","lastTransitionTime":"2026-01-29T15:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.215767 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.215846 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.215873 5008 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.215893 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.215944 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:30.215922562 +0000 UTC m=+53.888776809 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.216015 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.216033 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.216044 5008 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.216087 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:30.216076346 +0000 UTC m=+53.888930583 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.216088 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.216127 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.216141 5008 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.216211 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:30.216191209 +0000 UTC m=+53.889045446 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.265465 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.265503 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.265512 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.265525 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.265534 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:14Z","lastTransitionTime":"2026-01-29T15:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.295939 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 09:02:30.440606339 +0000 UTC Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.323664 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.323680 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.323852 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.323939 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.324062 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.324145 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.351970 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.352034 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.352044 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.352058 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.352069 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:14Z","lastTransitionTime":"2026-01-29T15:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.363979 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeByt
es\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.367592 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.367638 5008 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.367647 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.367663 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.367674 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:14Z","lastTransitionTime":"2026-01-29T15:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.381823 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.386034 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.386078 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.386089 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.386106 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.386117 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:14Z","lastTransitionTime":"2026-01-29T15:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.398810 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ...image list elided; identical to the list in the first retry above... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.401956 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.401988 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.401996 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.402010 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.402020 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:14Z","lastTransitionTime":"2026-01-29T15:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.412246 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ...image list elided; identical to the previous retries... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.415260 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.415283 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.415293 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.415308 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.415319 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:14Z","lastTransitionTime":"2026-01-29T15:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.427417 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ...image list elided; identical to the previous retries... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.427559 5008 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.429020 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.429062 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.429076 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.429093 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.429104 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:14Z","lastTransitionTime":"2026-01-29T15:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.531094 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.531143 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.531155 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.531169 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.531180 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:14Z","lastTransitionTime":"2026-01-29T15:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.633887 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.633923 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.633932 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.633945 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.633963 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:14Z","lastTransitionTime":"2026-01-29T15:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.664233 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovnkube-controller/1.log" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.664907 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovnkube-controller/0.log" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.668067 5008 generic.go:334] "Generic (PLEG): container finished" podID="1d092513-7735-4c98-9734-57bc46b99280" containerID="7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8" exitCode=1 Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.668107 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerDied","Data":"7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8"} Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.668159 5008 scope.go:117] "RemoveContainer" containerID="6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.668988 5008 scope.go:117] "RemoveContainer" containerID="7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8" Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.669150 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pqg9w_openshift-ovn-kubernetes(1d092513-7735-4c98-9734-57bc46b99280)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podUID="1d092513-7735-4c98-9734-57bc46b99280" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.669636 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" event={"ID":"4f5a0b69-5edd-467c-a822-093f1689df1d","Type":"ContainerStarted","Data":"98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c"} Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.683729 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.695827 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.714258 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.724300 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.733103 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.737256 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.737283 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.737295 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.737310 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.737321 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:14Z","lastTransitionTime":"2026-01-29T15:28:14Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.751358 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.762749 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.777636 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.788255 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.801109 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.811901 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.820946 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs\") pod \"network-metrics-daemon-kkc6c\" (UID: \"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\") " pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.821136 5008 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:28:14 crc kubenswrapper[5008]: E0129 15:28:14.821207 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs podName:f3716fd8-7f9b-44e2-ac3c-e907d8793dc9 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:16.821187472 +0000 UTC m=+40.494041709 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs") pod "network-metrics-daemon-kkc6c" (UID: "f3716fd8-7f9b-44e2-ac3c-e907d8793dc9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.822238 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 
2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.833205 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.839975 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.840028 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.840041 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.840061 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.840074 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:14Z","lastTransitionTime":"2026-01-29T15:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.846903 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.857206 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.866741 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.886704 5008 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"emoval\\\\nI0129 15:28:10.306875 6307 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 15:28:10.306882 6307 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 15:28:10.306927 6307 factory.go:656] Stopping watch factory\\\\nI0129 15:28:10.306933 6307 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.306953 6307 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 15:28:10.306717 6307 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307153 6307 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:28:10.307165 6307 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 15:28:10.307174 6307 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 15:28:10.307184 6307 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15:28:10.307193 6307 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 15:28:10.307206 6307 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 15:28:10.307328 6307 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307559 6307 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"nil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 
15:28:14.347483 6482 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 15:28:14.347479 6482 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347139 6482 services_controller.go:453] Built service openshift-ingress-canary/ingress-canary template LB for network=default: []services.LB{}\\\\nF0129 15:28:14.347332 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c
37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.900293 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.915067 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.935082 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.941851 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.941900 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:14 crc 
kubenswrapper[5008]: I0129 15:28:14.941914 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.941931 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.941945 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:14Z","lastTransitionTime":"2026-01-29T15:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.947715 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 
29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.960553 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.973131 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:14 crc kubenswrapper[5008]: I0129 15:28:14.994058 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"contain
erID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f396183
7d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.007153 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.029966 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.045275 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.045542 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.045637 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.045732 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.045834 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:15Z","lastTransitionTime":"2026-01-29T15:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.063590 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.083354 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.109988 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fb
f5c8f8b6cc3733fa97ccbab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"emoval\\\\nI0129 15:28:10.306875 6307 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 15:28:10.306882 6307 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 15:28:10.306927 6307 factory.go:656] Stopping watch factory\\\\nI0129 15:28:10.306933 6307 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.306953 6307 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 15:28:10.306717 6307 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307153 6307 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:28:10.307165 6307 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 15:28:10.307174 6307 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 15:28:10.307184 6307 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15:28:10.307193 6307 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 15:28:10.307206 6307 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 15:28:10.307328 6307 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307559 6307 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"nil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347483 6482 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 15:28:14.347479 6482 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347139 6482 services_controller.go:453] Built service openshift-ingress-canary/ingress-canary template LB for network=default: []services.LB{}\\\\nF0129 15:28:14.347332 6482 ovnkube.go:137] 
failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\
\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.123433 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.136960 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.147672 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.147723 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.147744 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.147761 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.147799 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:15Z","lastTransitionTime":"2026-01-29T15:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.151821 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.163228 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.177690 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.250574 5008 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.250658 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.250683 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.250715 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.250742 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:15Z","lastTransitionTime":"2026-01-29T15:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.296504 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 13:34:41.447143995 +0000 UTC Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.323357 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:15 crc kubenswrapper[5008]: E0129 15:28:15.323754 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.818291 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.818378 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.818876 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.818920 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.819006 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:15Z","lastTransitionTime":"2026-01-29T15:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.822267 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovnkube-controller/1.log" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.921429 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.921489 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.921513 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.921538 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:15 crc kubenswrapper[5008]: I0129 15:28:15.921554 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:15Z","lastTransitionTime":"2026-01-29T15:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.024223 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.024303 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.024326 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.024356 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.024379 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:16Z","lastTransitionTime":"2026-01-29T15:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.127669 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.127726 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.127742 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.127761 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.127774 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:16Z","lastTransitionTime":"2026-01-29T15:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.230757 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.230836 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.230851 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.230877 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.230894 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:16Z","lastTransitionTime":"2026-01-29T15:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.297358 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 09:57:28.35370186 +0000 UTC Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.323338 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.323405 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.323421 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:16 crc kubenswrapper[5008]: E0129 15:28:16.323549 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:16 crc kubenswrapper[5008]: E0129 15:28:16.324034 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:16 crc kubenswrapper[5008]: E0129 15:28:16.324157 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.333555 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.333604 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.333614 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.333633 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.333646 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:16Z","lastTransitionTime":"2026-01-29T15:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.437538 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.437582 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.437591 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.437610 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.437621 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:16Z","lastTransitionTime":"2026-01-29T15:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.541185 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.541240 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.541254 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.541278 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.541298 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:16Z","lastTransitionTime":"2026-01-29T15:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.644639 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.644697 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.644714 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.644736 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.644753 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:16Z","lastTransitionTime":"2026-01-29T15:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.747986 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.748055 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.748074 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.748106 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.748123 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:16Z","lastTransitionTime":"2026-01-29T15:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.843141 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs\") pod \"network-metrics-daemon-kkc6c\" (UID: \"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\") " pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:16 crc kubenswrapper[5008]: E0129 15:28:16.843340 5008 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:28:16 crc kubenswrapper[5008]: E0129 15:28:16.843421 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs podName:f3716fd8-7f9b-44e2-ac3c-e907d8793dc9 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:20.843397432 +0000 UTC m=+44.516251669 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs") pod "network-metrics-daemon-kkc6c" (UID: "f3716fd8-7f9b-44e2-ac3c-e907d8793dc9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.850971 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.851026 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.851040 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.851059 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.851073 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:16Z","lastTransitionTime":"2026-01-29T15:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.954650 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.954692 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.954701 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.954719 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:16 crc kubenswrapper[5008]: I0129 15:28:16.954733 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:16Z","lastTransitionTime":"2026-01-29T15:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.057897 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.057940 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.057953 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.057973 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.057983 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:17Z","lastTransitionTime":"2026-01-29T15:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.161452 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.161512 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.161525 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.161543 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.161554 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:17Z","lastTransitionTime":"2026-01-29T15:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.263655 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.263716 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.263734 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.263756 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.263768 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:17Z","lastTransitionTime":"2026-01-29T15:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.298267 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 14:48:20.285726841 +0000 UTC Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.323732 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:17 crc kubenswrapper[5008]: E0129 15:28:17.323904 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.336714 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.350191 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.364646 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.366135 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.366178 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:17 crc 
kubenswrapper[5008]: I0129 15:28:17.366190 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.366207 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.366219 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:17Z","lastTransitionTime":"2026-01-29T15:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.376713 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z" Jan 
29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.412280 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\
"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.428420 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.445291 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.459087 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.470440 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.470490 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.470531 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.470551 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.470563 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:17Z","lastTransitionTime":"2026-01-29T15:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.472101 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.485161 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.497063 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.509389 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.523680 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.535388 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.547006 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.568548 5008 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"emoval\\\\nI0129 15:28:10.306875 6307 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 15:28:10.306882 6307 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 15:28:10.306927 6307 factory.go:656] Stopping watch factory\\\\nI0129 15:28:10.306933 6307 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.306953 6307 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 15:28:10.306717 6307 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307153 6307 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:28:10.307165 6307 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 15:28:10.307174 6307 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 15:28:10.307184 6307 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15:28:10.307193 6307 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 15:28:10.307206 6307 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 15:28:10.307328 6307 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307559 6307 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"nil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 
15:28:14.347483 6482 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 15:28:14.347479 6482 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347139 6482 services_controller.go:453] Built service openshift-ingress-canary/ingress-canary template LB for network=default: []services.LB{}\\\\nF0129 15:28:14.347332 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c
37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.573153 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.573195 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.573207 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.573225 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 
15:28:17.573238 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:17Z","lastTransitionTime":"2026-01-29T15:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.580878 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.675362 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.675589 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.675693 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.675768 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.675849 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:17Z","lastTransitionTime":"2026-01-29T15:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.778441 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.778674 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.778760 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.778889 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.778973 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:17Z","lastTransitionTime":"2026-01-29T15:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.882494 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.882537 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.882552 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.882573 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.882588 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:17Z","lastTransitionTime":"2026-01-29T15:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.986323 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.986687 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.986727 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.986755 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:17 crc kubenswrapper[5008]: I0129 15:28:17.986767 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:17Z","lastTransitionTime":"2026-01-29T15:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.090661 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.090732 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.090751 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.090778 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.090838 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:18Z","lastTransitionTime":"2026-01-29T15:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.195896 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.195948 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.195965 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.195990 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.196010 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:18Z","lastTransitionTime":"2026-01-29T15:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.298501 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 22:50:45.45588741 +0000 UTC Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.300024 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.300109 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.300139 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.300171 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.300199 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:18Z","lastTransitionTime":"2026-01-29T15:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.323417 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:18 crc kubenswrapper[5008]: E0129 15:28:18.323623 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.323651 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.323723 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:18 crc kubenswrapper[5008]: E0129 15:28:18.323969 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:18 crc kubenswrapper[5008]: E0129 15:28:18.324324 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.324895 5008 scope.go:117] "RemoveContainer" containerID="4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.404097 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.404576 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.404590 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.404613 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.404638 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:18Z","lastTransitionTime":"2026-01-29T15:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.507115 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.507161 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.507171 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.507187 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.507198 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:18Z","lastTransitionTime":"2026-01-29T15:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.609995 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.610046 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.610058 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.610271 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.610281 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:18Z","lastTransitionTime":"2026-01-29T15:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.712737 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.712812 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.712837 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.712858 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.712872 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:18Z","lastTransitionTime":"2026-01-29T15:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.815250 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.815290 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.815300 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.815317 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.815328 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:18Z","lastTransitionTime":"2026-01-29T15:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.839034 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.840688 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c"}
Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.841707 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.857251 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:18Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.874286 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:18Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.907274 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"emoval\\\\nI0129 15:28:10.306875 6307 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 15:28:10.306882 6307 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 15:28:10.306927 6307 factory.go:656] Stopping watch factory\\\\nI0129 15:28:10.306933 6307 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.306953 6307 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 15:28:10.306717 6307 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307153 6307 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:28:10.307165 6307 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 15:28:10.307174 6307 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 15:28:10.307184 6307 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15:28:10.307193 6307 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 15:28:10.307206 6307 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 15:28:10.307328 6307 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307559 6307 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"nil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347483 6482 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 15:28:14.347479 6482 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347139 6482 services_controller.go:453] Built service openshift-ingress-canary/ingress-canary template LB for network=default: []services.LB{}\\\\nF0129 15:28:14.347332 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:18Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.917963 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.918025 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.918043 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.918068 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.918085 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:18Z","lastTransitionTime":"2026-01-29T15:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.922260 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:18Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.939176 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:18Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.957032 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:18Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:18 crc kubenswrapper[5008]: I0129 15:28:18.979071 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:18Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.001126 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:18Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.020251 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.020305 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.020322 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.020343 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.020360 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:19Z","lastTransitionTime":"2026-01-29T15:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.020385 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7
8d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:19Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.035294 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:19Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.049666 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:19Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.064438 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:19Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.078502 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:19Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.091365 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:19Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.111564 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:19Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.123165 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.123206 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.123216 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.123232 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.123242 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:19Z","lastTransitionTime":"2026-01-29T15:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.125981 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:19Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.138693 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:19Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.226137 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.226195 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.226207 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.226224 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.226235 5008 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:19Z","lastTransitionTime":"2026-01-29T15:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.299115 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 19:46:37.392876664 +0000 UTC Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.323864 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:19 crc kubenswrapper[5008]: E0129 15:28:19.324318 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.328480 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.328561 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.328582 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.328606 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.328619 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:19Z","lastTransitionTime":"2026-01-29T15:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.431049 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.431081 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.431092 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.431105 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.431114 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:19Z","lastTransitionTime":"2026-01-29T15:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.533481 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.533526 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.533537 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.533551 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.533562 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:19Z","lastTransitionTime":"2026-01-29T15:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.635572 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.635609 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.635618 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.635631 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.635640 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:19Z","lastTransitionTime":"2026-01-29T15:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.738704 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.738774 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.738823 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.738842 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.738853 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:19Z","lastTransitionTime":"2026-01-29T15:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.841461 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.841499 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.841507 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.841523 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.841533 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:19Z","lastTransitionTime":"2026-01-29T15:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.944729 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.944834 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.944846 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.944862 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:19 crc kubenswrapper[5008]: I0129 15:28:19.944874 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:19Z","lastTransitionTime":"2026-01-29T15:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.047031 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.047108 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.047126 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.047153 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.047170 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:20Z","lastTransitionTime":"2026-01-29T15:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.149867 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.149916 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.149926 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.149943 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.149953 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:20Z","lastTransitionTime":"2026-01-29T15:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.252559 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.252601 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.252647 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.252663 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.252674 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:20Z","lastTransitionTime":"2026-01-29T15:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.299835 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 11:34:46.492243769 +0000 UTC Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.323399 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.323481 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.323569 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:20 crc kubenswrapper[5008]: E0129 15:28:20.323639 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:20 crc kubenswrapper[5008]: E0129 15:28:20.323740 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:20 crc kubenswrapper[5008]: E0129 15:28:20.323883 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.355038 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.355094 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.355111 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.355138 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.355153 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:20Z","lastTransitionTime":"2026-01-29T15:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.458378 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.458649 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.458819 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.458938 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.459054 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:20Z","lastTransitionTime":"2026-01-29T15:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.562512 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.562588 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.562613 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.562683 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.562708 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:20Z","lastTransitionTime":"2026-01-29T15:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.665084 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.665128 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.665140 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.665156 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.665166 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:20Z","lastTransitionTime":"2026-01-29T15:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.766884 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.767183 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.767309 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.767405 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.767505 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:20Z","lastTransitionTime":"2026-01-29T15:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.869494 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.869548 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.869560 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.869579 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.869589 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:20Z","lastTransitionTime":"2026-01-29T15:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.886625 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs\") pod \"network-metrics-daemon-kkc6c\" (UID: \"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\") " pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:20 crc kubenswrapper[5008]: E0129 15:28:20.887074 5008 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:28:20 crc kubenswrapper[5008]: E0129 15:28:20.887314 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs podName:f3716fd8-7f9b-44e2-ac3c-e907d8793dc9 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:28.887284018 +0000 UTC m=+52.560138285 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs") pod "network-metrics-daemon-kkc6c" (UID: "f3716fd8-7f9b-44e2-ac3c-e907d8793dc9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.972925 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.972988 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.973002 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.973019 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:20 crc kubenswrapper[5008]: I0129 15:28:20.973031 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:20Z","lastTransitionTime":"2026-01-29T15:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.075769 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.075957 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.076002 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.076035 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.076059 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:21Z","lastTransitionTime":"2026-01-29T15:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
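[The mount failure above is a secondary symptom of the same startup ordering: "object ... not registered" typically means the kubelet's volume manager has not yet registered the secret for this pod while the node is still coming up, so it backs off for 8 s before retrying. If it persisted, one could verify the secret exists server-side; a sketch assuming a logged-in oc client (standard commands, not taken from this log):

    # Does the secret the volume references actually exist?
    oc get secret metrics-daemon-secret -n openshift-multus

    # Watch the pod that is waiting on the mount.
    oc get pod network-metrics-daemon-kkc6c -n openshift-multus -w
]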
Has your network provider started?"} Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.178919 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.178980 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.179000 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.179026 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.179043 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:21Z","lastTransitionTime":"2026-01-29T15:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.281887 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.281936 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.281947 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.281966 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.281977 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:21Z","lastTransitionTime":"2026-01-29T15:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.300015 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 19:13:50.227555842 +0000 UTC Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.323177 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:21 crc kubenswrapper[5008]: E0129 15:28:21.323422 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.384855 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.384910 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.384921 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.384939 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.384952 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:21Z","lastTransitionTime":"2026-01-29T15:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.487009 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.487067 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.487080 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.487098 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.487111 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:21Z","lastTransitionTime":"2026-01-29T15:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.593158 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.593203 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.593213 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.593232 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.593249 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:21Z","lastTransitionTime":"2026-01-29T15:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.696392 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.696460 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.696479 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.696500 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.696512 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:21Z","lastTransitionTime":"2026-01-29T15:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.800978 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.801044 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.801065 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.801098 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.801118 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:21Z","lastTransitionTime":"2026-01-29T15:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.903632 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.903699 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.903728 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.903756 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:21 crc kubenswrapper[5008]: I0129 15:28:21.903821 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:21Z","lastTransitionTime":"2026-01-29T15:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.006697 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.006737 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.006754 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.006768 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.006776 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:22Z","lastTransitionTime":"2026-01-29T15:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.110156 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.110261 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.110285 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.110317 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.110340 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:22Z","lastTransitionTime":"2026-01-29T15:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.229879 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.229939 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.229951 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.229969 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.229982 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:22Z","lastTransitionTime":"2026-01-29T15:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.301139 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 05:59:17.632877811 +0000 UTC Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.323682 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.323744 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:22 crc kubenswrapper[5008]: E0129 15:28:22.323900 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.323708 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:22 crc kubenswrapper[5008]: E0129 15:28:22.324071 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:22 crc kubenswrapper[5008]: E0129 15:28:22.324154 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.332024 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.332079 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.332094 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.332139 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.332154 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:22Z","lastTransitionTime":"2026-01-29T15:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.434311 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.434345 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.434354 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.434367 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.434376 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:22Z","lastTransitionTime":"2026-01-29T15:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.536978 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.537055 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.537078 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.537103 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.537117 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:22Z","lastTransitionTime":"2026-01-29T15:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.640146 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.640181 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.640190 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.640205 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.640218 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:22Z","lastTransitionTime":"2026-01-29T15:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.742907 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.742982 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.743006 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.743033 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.743049 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:22Z","lastTransitionTime":"2026-01-29T15:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.846280 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.846336 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.846362 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.846390 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.846411 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:22Z","lastTransitionTime":"2026-01-29T15:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.949718 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.949815 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.949834 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.949856 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:22 crc kubenswrapper[5008]: I0129 15:28:22.949872 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:22Z","lastTransitionTime":"2026-01-29T15:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.052912 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.053027 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.053053 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.053125 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.053171 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:23Z","lastTransitionTime":"2026-01-29T15:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.156272 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.156320 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.156333 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.156351 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.156363 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:23Z","lastTransitionTime":"2026-01-29T15:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.258282 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.258344 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.258357 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.258375 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.258387 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:23Z","lastTransitionTime":"2026-01-29T15:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.302360 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 02:08:58.260944731 +0000 UTC Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.323285 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:23 crc kubenswrapper[5008]: E0129 15:28:23.323459 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.361623 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.361682 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.361703 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.361733 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.361756 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:23Z","lastTransitionTime":"2026-01-29T15:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.465305 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.465358 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.465371 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.465390 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.465402 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:23Z","lastTransitionTime":"2026-01-29T15:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.568759 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.568989 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.569020 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.569054 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.569077 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:23Z","lastTransitionTime":"2026-01-29T15:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.672285 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.672342 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.672361 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.672384 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.672401 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:23Z","lastTransitionTime":"2026-01-29T15:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.775097 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.775201 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.775224 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.775255 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.775276 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:23Z","lastTransitionTime":"2026-01-29T15:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.877881 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.877924 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.877934 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.877950 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.877961 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:23Z","lastTransitionTime":"2026-01-29T15:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.981415 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.981513 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.981531 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.981564 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:23 crc kubenswrapper[5008]: I0129 15:28:23.981578 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:23Z","lastTransitionTime":"2026-01-29T15:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.085483 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.085559 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.085571 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.085598 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.085610 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:24Z","lastTransitionTime":"2026-01-29T15:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.188723 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.188824 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.188843 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.188872 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.188893 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:24Z","lastTransitionTime":"2026-01-29T15:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.291502 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.291542 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.291552 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.291568 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.291576 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:24Z","lastTransitionTime":"2026-01-29T15:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.303237 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 21:23:57.008249696 +0000 UTC Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.323731 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.323818 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.323858 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:24 crc kubenswrapper[5008]: E0129 15:28:24.323984 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:24 crc kubenswrapper[5008]: E0129 15:28:24.324115 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:24 crc kubenswrapper[5008]: E0129 15:28:24.324228 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.395070 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.395139 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.395153 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.395169 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.395182 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:24Z","lastTransitionTime":"2026-01-29T15:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.498151 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.498204 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.498218 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.498236 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.498249 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:24Z","lastTransitionTime":"2026-01-29T15:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.600185 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.600258 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.600282 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.600314 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.600336 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:24Z","lastTransitionTime":"2026-01-29T15:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.613830 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.613881 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.613896 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.613915 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.613927 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:24Z","lastTransitionTime":"2026-01-29T15:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:24 crc kubenswrapper[5008]: E0129 15:28:24.632602 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:24Z is after 2025-08-24T17:21:41Z"
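
Every node-status update in this capture fails the same way: the kubelet's PATCH is forwarded by the API server to the node.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, and the TLS handshake with that webhook fails because its serving certificate's notAfter (2025-08-24T17:21:41Z) is roughly five months before the node's clock time (2026-01-29T15:28:24Z). A minimal Go sketch for confirming the certificate window from the node itself (the endpoint is taken from the log entry above; this is a diagnostic aid, not OpenShift tooling):

// certcheck.go -- diagnostic sketch, not OpenShift tooling: dial the webhook
// endpoint taken from the log above and report the validity window of the
// certificate it presents. Run it on the node itself (the address is loopback).
package main

import (
	"crypto/tls"
	"fmt"
	"os"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // endpoint from the failed webhook POST above
	conn, err := tls.Dial("tcp", addr, &tls.Config{
		InsecureSkipVerify: true, // we only want to inspect the cert, not trust it
	})
	if err != nil {
		fmt.Fprintf(os.Stderr, "dial %s: %v\n", addr, err)
		os.Exit(1)
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Fprintln(os.Stderr, "server presented no certificate")
		os.Exit(1)
	}
	leaf := certs[0]
	now := time.Now().UTC()
	fmt.Printf("subject:   %s\n", leaf.Subject)
	fmt.Printf("notBefore: %s\n", leaf.NotBefore.UTC().Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", leaf.NotAfter.UTC().Format(time.RFC3339))
	if now.After(leaf.NotAfter) {
		// The condition the kubelet_node_status.go errors are reporting:
		// "current time ... is after <notAfter>".
		fmt.Printf("EXPIRED: current time %s is after %s\n",
			now.Format(time.RFC3339), leaf.NotAfter.UTC().Format(time.RFC3339))
		os.Exit(1)
	}
	fmt.Println("certificate is within its validity window")
}

If the notAfter printed here matches the log, the webhook's serving certificate genuinely needs rotation; if it does not, suspect the node clock instead, since x509 validity is checked against local time.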
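The NotReady condition the kubelet keeps recording on either side of these failures has its own cause, stated in every message: no CNI configuration file in /etc/kubernetes/cni/net.d/, so NetworkReady stays false, and the earlier "Error syncing pod, skipping" entries for network-check-target, network-check-source, and networking-console-plugin are downstream of the same condition. A standalone Go sketch of that directory check follows; it is not the kubelet's actual implementation, and the extension list (.conf, .conflist, .json) follows the usual libcni convention and is an assumption here:

// cnicheck.go -- standalone sketch of the readiness test behind the
// "no CNI configuration file" message above; not the kubelet's actual code.
// The directory comes from the log; the extension list is an assumption
// based on the common libcni convention.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log above
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Fprintf(os.Stderr, "cannot read %s: %v\n", confDir, err)
		os.Exit(1)
	}
	var confs []string
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions libcni conventionally scans
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		// The state reported above: NetworkReady=false, NetworkPluginNotReady.
		fmt.Printf("no CNI configuration file in %s\n", confDir)
		os.Exit(1)
	}
	fmt.Printf("found CNI config(s): %v\n", confs)
}

An empty result here would mean the network provider (OVN-Kubernetes on this cluster) has not written its config yet, which is consistent with its webhook certificate being unusable.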
Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.636448 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.636488 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.636504 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.636523 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.636538 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:24Z","lastTransitionTime":"2026-01-29T15:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:24 crc kubenswrapper[5008]: E0129 15:28:24.652376 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:24Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.676642 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.676696 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.676705 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.676720 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.676729 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:24Z","lastTransitionTime":"2026-01-29T15:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:24 crc kubenswrapper[5008]: E0129 15:28:24.691527 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:24Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.696508 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.696550 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.696558 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.696573 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.696584 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:24Z","lastTransitionTime":"2026-01-29T15:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:24 crc kubenswrapper[5008]: E0129 15:28:24.714898 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:24Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.718297 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.718334 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.718346 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.718364 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.718376 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:24Z","lastTransitionTime":"2026-01-29T15:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:24 crc kubenswrapper[5008]: E0129 15:28:24.730237 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:24Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:24 crc kubenswrapper[5008]: E0129 15:28:24.730386 5008 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.732056 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.732112 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.732124 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.732142 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.732154 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:24Z","lastTransitionTime":"2026-01-29T15:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.835264 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.835313 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.835330 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.835353 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.835371 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:24Z","lastTransitionTime":"2026-01-29T15:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.938987 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.939042 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.939060 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.939084 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:24 crc kubenswrapper[5008]: I0129 15:28:24.939100 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:24Z","lastTransitionTime":"2026-01-29T15:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.041551 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.041617 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.041634 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.041673 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.041712 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:25Z","lastTransitionTime":"2026-01-29T15:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.143898 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.143984 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.144012 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.144047 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.144071 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:25Z","lastTransitionTime":"2026-01-29T15:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.247207 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.247250 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.247261 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.247277 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.247289 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:25Z","lastTransitionTime":"2026-01-29T15:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.304149 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 10:19:25.210163067 +0000 UTC Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.323550 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:25 crc kubenswrapper[5008]: E0129 15:28:25.323730 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.352281 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.352321 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.352332 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.352349 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.352361 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:25Z","lastTransitionTime":"2026-01-29T15:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.455947 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.456020 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.456044 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.456074 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.456116 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:25Z","lastTransitionTime":"2026-01-29T15:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.559008 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.559081 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.559103 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.559133 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.559155 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:25Z","lastTransitionTime":"2026-01-29T15:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.661862 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.661912 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.661929 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.661954 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.661972 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:25Z","lastTransitionTime":"2026-01-29T15:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.763837 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.763877 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.763888 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.763904 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.763914 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:25Z","lastTransitionTime":"2026-01-29T15:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.868125 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.868178 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.868190 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.868212 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.868226 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:25Z","lastTransitionTime":"2026-01-29T15:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.971522 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.971600 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.971621 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.971654 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:25 crc kubenswrapper[5008]: I0129 15:28:25.971675 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:25Z","lastTransitionTime":"2026-01-29T15:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.074451 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.074494 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.074506 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.074525 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.074541 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:26Z","lastTransitionTime":"2026-01-29T15:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.177904 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.177979 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.177997 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.178023 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.178043 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:26Z","lastTransitionTime":"2026-01-29T15:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.282067 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.282132 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.282152 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.282184 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.282207 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:26Z","lastTransitionTime":"2026-01-29T15:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.305300 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 17:22:00.098198958 +0000 UTC Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.322692 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.322855 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:26 crc kubenswrapper[5008]: E0129 15:28:26.322866 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.323047 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:26 crc kubenswrapper[5008]: E0129 15:28:26.323205 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:26 crc kubenswrapper[5008]: E0129 15:28:26.323354 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.385305 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.385347 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.385359 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.385380 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.385392 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:26Z","lastTransitionTime":"2026-01-29T15:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.489452 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.489521 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.489534 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.489561 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.489585 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:26Z","lastTransitionTime":"2026-01-29T15:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.592589 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.592664 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.592680 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.592709 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.592726 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:26Z","lastTransitionTime":"2026-01-29T15:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.695993 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.696057 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.696080 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.696117 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.696140 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:26Z","lastTransitionTime":"2026-01-29T15:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.800688 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.800763 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.800829 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.800856 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.800873 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:26Z","lastTransitionTime":"2026-01-29T15:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.903558 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.903623 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.903640 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.903664 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.903682 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:26Z","lastTransitionTime":"2026-01-29T15:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.982530 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 15:28:26 crc kubenswrapper[5008]: I0129 15:28:26.996615 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.009751 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.014540 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.014614 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.014631 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.014657 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.014675 5008 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:27Z","lastTransitionTime":"2026-01-29T15:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.030311 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.055193 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: 
I0129 15:28:27.071016 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.089774 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.108555 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.117991 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.118048 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.118065 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.118090 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.118107 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:27Z","lastTransitionTime":"2026-01-29T15:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.123334 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.159608 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bc
a2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.181017 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.195086 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.210869 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.221162 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.221232 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.221252 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.221277 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.221297 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:27Z","lastTransitionTime":"2026-01-29T15:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.227934 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.246942 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"emoval\\\\nI0129 15:28:10.306875 6307 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 15:28:10.306882 6307 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 15:28:10.306927 6307 factory.go:656] Stopping watch factory\\\\nI0129 15:28:10.306933 6307 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.306953 6307 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 15:28:10.306717 6307 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307153 6307 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:28:10.307165 6307 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 15:28:10.307174 6307 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 15:28:10.307184 6307 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15:28:10.307193 6307 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 15:28:10.307206 6307 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 15:28:10.307328 6307 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307559 6307 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"nil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347483 6482 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 15:28:14.347479 6482 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347139 6482 services_controller.go:453] Built service openshift-ingress-canary/ingress-canary template LB for network=default: []services.LB{}\\\\nF0129 15:28:14.347332 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy 
c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.260644 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.276526 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.293879 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.304920 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.305802 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 08:38:20.318054668 +0000 UTC Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.322697 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:27 crc kubenswrapper[5008]: E0129 15:28:27.322877 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.325682 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.325775 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.325845 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.325868 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.325928 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:27Z","lastTransitionTime":"2026-01-29T15:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.339266 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f3710d4-b153-4018-a492-367eb8b81ef8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c89e24fc5acc0577d3d738d63e7982aa32a07ecc01952570f6f417286b8747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33245f510d76b9610b3e44259d0944eaef5873c4bc31c3f3012a013248d16933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76eae897742ba4e95f6d60a81e2da82f1c0b0e220f48473436b03bff9f2f7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.360545 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.381144 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.394973 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.409212 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.428197 5008 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.428253 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.428269 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.428293 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.428311 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:27Z","lastTransitionTime":"2026-01-29T15:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.446642 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fb
f5c8f8b6cc3733fa97ccbab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"emoval\\\\nI0129 15:28:10.306875 6307 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 15:28:10.306882 6307 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 15:28:10.306927 6307 factory.go:656] Stopping watch factory\\\\nI0129 15:28:10.306933 6307 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.306953 6307 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 15:28:10.306717 6307 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307153 6307 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:28:10.307165 6307 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 15:28:10.307174 6307 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 15:28:10.307184 6307 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15:28:10.307193 6307 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 15:28:10.307206 6307 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 15:28:10.307328 6307 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307559 6307 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"nil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347483 6482 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 15:28:14.347479 6482 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347139 6482 services_controller.go:453] Built service openshift-ingress-canary/ingress-canary template LB for network=default: []services.LB{}\\\\nF0129 15:28:14.347332 6482 ovnkube.go:137] 
failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\
\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.459703 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.472421 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.484502 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.499344 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.512051 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.530960 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.531005 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.531020 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.531040 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.531056 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:27Z","lastTransitionTime":"2026-01-29T15:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.531143 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.544332 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.559485 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.574145 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.587209 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.598123 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.608524 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\
"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.633430 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.633463 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.633472 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.633485 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.633494 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:27Z","lastTransitionTime":"2026-01-29T15:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.737046 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.737152 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.737175 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.737204 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.737387 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:27Z","lastTransitionTime":"2026-01-29T15:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.839996 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.840055 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.840073 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.840094 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.840111 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:27Z","lastTransitionTime":"2026-01-29T15:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.942375 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.942554 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.942584 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.942686 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:27 crc kubenswrapper[5008]: I0129 15:28:27.942753 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:27Z","lastTransitionTime":"2026-01-29T15:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.045921 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.045990 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.046007 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.046034 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.046051 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:28Z","lastTransitionTime":"2026-01-29T15:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.124197 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.136673 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.149932 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.149965 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.149973 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.149987 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.149998 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:28Z","lastTransitionTime":"2026-01-29T15:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.156551 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f3710d4-b153-4018-a492-367eb8b81ef8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c89e24fc5acc0577d3d738d63e7982aa32a07ecc01952570f6f417286b8747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33245f510d76b9610b3e44259d0944eaef5873c4bc31c3f3012a013248d16933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76eae897742ba4e95f6d60a81e2da82f1c0b0e220f48473436b03bff9f2f7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.174775 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.194432 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.207811 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.221196 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.251298 5008 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6111e93f68c8aa5c23e0317317a19c4a1df88a0d6babfab0d89d65902410feee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"message\\\":\\\"emoval\\\\nI0129 15:28:10.306875 6307 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 15:28:10.306882 6307 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 15:28:10.306927 6307 factory.go:656] Stopping watch factory\\\\nI0129 15:28:10.306933 6307 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.306953 6307 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 15:28:10.306717 6307 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307153 6307 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:28:10.307165 6307 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 15:28:10.307174 6307 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 15:28:10.307184 6307 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15:28:10.307193 6307 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 15:28:10.307206 6307 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 15:28:10.307328 6307 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:10.307559 6307 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"nil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 
15:28:14.347483 6482 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 15:28:14.347479 6482 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347139 6482 services_controller.go:453] Built service openshift-ingress-canary/ingress-canary template LB for network=default: []services.LB{}\\\\nF0129 15:28:14.347332 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c
37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.252172 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.252201 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.252209 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.252224 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 
15:28:28.252233 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:28Z","lastTransitionTime":"2026-01-29T15:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.265832 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.281604 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.305617 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.305883 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 00:45:53.306872717 +0000 UTC Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.316508 5008 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.323593 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:28 crc kubenswrapper[5008]: E0129 15:28:28.323731 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.323609 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.323600 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:28 crc kubenswrapper[5008]: E0129 15:28:28.323870 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:28 crc kubenswrapper[5008]: E0129 15:28:28.324009 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.331430 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.354935 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.355006 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.355023 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.355047 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.355069 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:28Z","lastTransitionTime":"2026-01-29T15:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.358935 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.374920 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.391201 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.406508 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.418747 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.431009 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.458211 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.458269 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.458284 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.458305 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.458318 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:28Z","lastTransitionTime":"2026-01-29T15:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.561327 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.561418 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.561451 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.561483 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.561508 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:28Z","lastTransitionTime":"2026-01-29T15:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.664653 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.664698 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.664709 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.664723 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.664731 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:28Z","lastTransitionTime":"2026-01-29T15:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.766992 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.767034 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.767045 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.767062 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.767073 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:28Z","lastTransitionTime":"2026-01-29T15:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
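[Annotation] The kubelet keeps re-recording NodeNotReady here because setters.go sets the node's Ready condition to False for as long as no CNI configuration file exists under /etc/kubernetes/cni/net.d/. A hedged client-go sketch for watching the same conditions from the API side (the node name "crc" is taken from this log; the kubeconfig path is an assumption; package paths are standard client-go):

package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumed kubeconfig location; adjust for the environment at hand.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Fetch the node and dump the conditions the kubelet's setters populate;
	// Ready should flip back to True once a CNI config appears.
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-28s %-6s %s: %s\n", c.Type, c.Status, c.Reason, c.Message)
	}
}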
Has your network provider started?"} Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.869755 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.869838 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.869852 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.869872 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.869890 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:28Z","lastTransitionTime":"2026-01-29T15:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.894220 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs\") pod \"network-metrics-daemon-kkc6c\" (UID: \"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\") " pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:28 crc kubenswrapper[5008]: E0129 15:28:28.894424 5008 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:28:28 crc kubenswrapper[5008]: E0129 15:28:28.894531 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs podName:f3716fd8-7f9b-44e2-ac3c-e907d8793dc9 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:44.894502991 +0000 UTC m=+68.567357258 (durationBeforeRetry 16s). 
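[Annotation] The metrics-certs mount failure recorded here is parked for 16 s ("durationBeforeRetry 16s") before the volume manager is allowed to retry; the delay between attempts grows exponentially. An illustrative Go sketch of that doubling-with-cap pattern (the initial delay and the cap below are assumptions chosen for illustration, not values read from kubelet source; only the 16 s figure comes from this log):

package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initialDelay = 500 * time.Millisecond // assumed starting delay
		maxDelay     = 2 * time.Minute        // assumed upper bound
	)

	// Each failed attempt doubles the wait until the cap is reached,
	// which is how a 16s durationBeforeRetry arises after several failures.
	delay := initialDelay
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("attempt %2d: next retry in %s\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}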
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs") pod "network-metrics-daemon-kkc6c" (UID: "f3716fd8-7f9b-44e2-ac3c-e907d8793dc9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.976166 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.976261 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.976283 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.976309 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:28 crc kubenswrapper[5008]: I0129 15:28:28.976328 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:28Z","lastTransitionTime":"2026-01-29T15:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.080160 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.080215 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.080232 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.080257 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.080275 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:29Z","lastTransitionTime":"2026-01-29T15:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.183527 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.183632 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.183681 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.183704 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.183727 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:29Z","lastTransitionTime":"2026-01-29T15:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.287258 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.287345 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.287368 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.287397 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.287430 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:29Z","lastTransitionTime":"2026-01-29T15:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.306398 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 07:23:45.349611014 +0000 UTC Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.322742 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:29 crc kubenswrapper[5008]: E0129 15:28:29.322956 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
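[Annotation] Note the kubelet-serving rotation deadline changes between consecutive certificate_manager.go records (2025-11-07 at 15:28:28, 2025-12-01 here), and both fall in the past relative to the node clock. A sketch of the behavior consistent with this, under the assumption (from upstream kubelet behavior, not from this log) that the deadline is redrawn on each evaluation, uniformly from roughly 70-90% of the certificate's validity window; the one-year validity below is likewise an assumption that happens to reproduce the observed dates:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline draws a jittered deadline in [70%, 90%] of the
// certificate's lifetime, measured from notBefore.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Expiration taken from the log; issuance assumed one year earlier.
	notAfter, err := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	if err != nil {
		panic(err)
	}
	notBefore := notAfter.AddDate(-1, 0, 0)

	// Repeated draws land in the Nov 2025 - Jan 2026 band, matching the
	// two deadlines logged one second apart above.
	for i := 0; i < 3; i++ {
		fmt.Println(rotationDeadline(notBefore, notAfter))
	}
}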
pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.323912 5008 scope.go:117] "RemoveContainer" containerID="7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.360392 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-
o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"nil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347483 6482 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 15:28:14.347479 6482 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347139 6482 services_controller.go:453] Built service openshift-ingress-canary/ingress-canary template LB for network=default: []services.LB{}\\\\nF0129 15:28:14.347332 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy 
c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pqg9w_openshift-ovn-kubernetes(1d092513-7735-4c98-9734-57bc46b99280)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.373465 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.385692 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f3710d4-b153-4018-a492-367eb8b81ef8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c89e24fc5acc0577d3d738d63e7982aa32a07ecc01952570f6f417286b8747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33245f510d76b9610b3e44259d0944eaef5873c4bc31c3f3012a013248d16933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76eae897742ba4e95f6d60a81e2da82f1c0b0e220f48473436b03bff9f2f7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.390563 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.390647 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.390663 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.390684 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.390701 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:29Z","lastTransitionTime":"2026-01-29T15:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.399718 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.418746 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.432906 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.448760 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.463606 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.477153 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.494907 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.494951 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.494959 5008 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.494972 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.494983 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:29Z","lastTransitionTime":"2026-01-29T15:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.499113 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d
0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.509710 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.522080 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.533589 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.552885 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa
3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731473
1ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.567533 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.581556 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.596775 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.596832 
5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.596845 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.596861 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.596873 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:29Z","lastTransitionTime":"2026-01-29T15:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.598991 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.614550 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.681868 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.698820 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.698867 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.698879 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.698899 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.698913 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:29Z","lastTransitionTime":"2026-01-29T15:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.801796 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.801843 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.801854 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.801870 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.801881 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:29Z","lastTransitionTime":"2026-01-29T15:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.904393 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.904445 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.904457 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.904473 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:29 crc kubenswrapper[5008]: I0129 15:28:29.904485 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:29Z","lastTransitionTime":"2026-01-29T15:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.007401 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.007440 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.007451 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.007467 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.007479 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:30Z","lastTransitionTime":"2026-01-29T15:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.109587 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.109634 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.109647 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.109683 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.109698 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:30Z","lastTransitionTime":"2026-01-29T15:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.208828 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:28:30 crc kubenswrapper[5008]: E0129 15:28:30.208979 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:29:02.208951102 +0000 UTC m=+85.881805339 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.209052 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:30 crc kubenswrapper[5008]: E0129 15:28:30.209201 5008 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:28:30 crc kubenswrapper[5008]: E0129 15:28:30.209264 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:29:02.20924651 +0000 UTC m=+85.882100747 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.214814 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.214856 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.214867 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.214884 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.214894 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:30Z","lastTransitionTime":"2026-01-29T15:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.307630 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 21:38:52.014474638 +0000 UTC Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.310166 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.310229 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.310270 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:30 crc kubenswrapper[5008]: E0129 15:28:30.310345 5008 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:28:30 crc kubenswrapper[5008]: E0129 15:28:30.310346 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 
15:28:30 crc kubenswrapper[5008]: E0129 15:28:30.310414 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:28:30 crc kubenswrapper[5008]: E0129 15:28:30.310432 5008 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:30 crc kubenswrapper[5008]: E0129 15:28:30.310394 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:29:02.310381301 +0000 UTC m=+85.983235548 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:28:30 crc kubenswrapper[5008]: E0129 15:28:30.310568 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:29:02.310536075 +0000 UTC m=+85.983390322 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:30 crc kubenswrapper[5008]: E0129 15:28:30.310597 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:28:30 crc kubenswrapper[5008]: E0129 15:28:30.310671 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:28:30 crc kubenswrapper[5008]: E0129 15:28:30.310699 5008 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:30 crc kubenswrapper[5008]: E0129 15:28:30.310843 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:29:02.310776282 +0000 UTC m=+85.983630559 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.317588 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.317659 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.317684 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.317717 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.317744 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:30Z","lastTransitionTime":"2026-01-29T15:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.322669 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.322695 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.322680 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:30 crc kubenswrapper[5008]: E0129 15:28:30.322830 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:30 crc kubenswrapper[5008]: E0129 15:28:30.323003 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:30 crc kubenswrapper[5008]: E0129 15:28:30.323094 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.420744 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.420825 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.420843 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.420865 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.420882 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:30Z","lastTransitionTime":"2026-01-29T15:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.523098 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.523132 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.523141 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.523156 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.523165 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:30Z","lastTransitionTime":"2026-01-29T15:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.625932 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.625996 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.626018 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.626045 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.626066 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:30Z","lastTransitionTime":"2026-01-29T15:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.728918 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.729014 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.729039 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.729073 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.729100 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:30Z","lastTransitionTime":"2026-01-29T15:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.832011 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.832071 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.832085 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.832102 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.832114 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:30Z","lastTransitionTime":"2026-01-29T15:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.886288 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovnkube-controller/1.log" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.889465 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerStarted","Data":"643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120"} Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.889910 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.904235 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:30Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.918973 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:30Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.930561 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:30Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.934256 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.934282 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.934294 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.934310 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.934323 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:30Z","lastTransitionTime":"2026-01-29T15:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.949933 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"starte
dAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:30Z is after 
2025-08-24T17:21:41Z" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.962436 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:30Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.976217 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:30Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:30 crc kubenswrapper[5008]: I0129 15:28:30.988841 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:30Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.000961 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:30Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.016710 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\
"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.036893 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.036961 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.036979 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.037002 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.037022 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:31Z","lastTransitionTime":"2026-01-29T15:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.051707 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.071610 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.090099 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.104644 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.117750 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.139288 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.139494 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.139528 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.139563 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.139588 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:31Z","lastTransitionTime":"2026-01-29T15:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.142419 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643ac2f5dd2119b6ede74fb609222a3e5d7643c3
02ea60d1799cf3b8db6e2120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"nil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347483 6482 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 15:28:14.347479 6482 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347139 6482 services_controller.go:453] Built service openshift-ingress-canary/ingress-canary template LB for network=default: []services.LB{}\\\\nF0129 15:28:14.347332 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy 
c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.157459 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.170584 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f3710d4-b153-4018-a492-367eb8b81ef8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c89e24fc5acc0577d3d738d63e7982aa32a07ecc01952570f6f417286b8747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33245f510d76b9610b3e44259d0944eaef5873c4bc31c3f3012a013248d16933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76eae897742ba4e95f6d60a81e2da82f1c0b0e220f48473436b03bff9f2f7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.186262 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:31Z is after 2025-08-24T17:21:41Z" Jan 29 
15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.242413 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.242448 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.242459 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.242475 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.242484 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:31Z","lastTransitionTime":"2026-01-29T15:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.308543 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 07:08:56.951149413 +0000 UTC Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.323109 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:31 crc kubenswrapper[5008]: E0129 15:28:31.323456 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.344921 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.344981 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.344999 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.345025 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.345043 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:31Z","lastTransitionTime":"2026-01-29T15:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.447963 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.448040 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.448063 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.448091 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.448112 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:31Z","lastTransitionTime":"2026-01-29T15:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.550932 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.550990 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.551008 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.551032 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.551049 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:31Z","lastTransitionTime":"2026-01-29T15:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.654512 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.654549 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.654571 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.654590 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.654603 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:31Z","lastTransitionTime":"2026-01-29T15:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.758154 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.758236 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.758273 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.758303 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.758324 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:31Z","lastTransitionTime":"2026-01-29T15:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.861912 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.861978 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.862002 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.862029 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.862046 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:31Z","lastTransitionTime":"2026-01-29T15:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.896184 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovnkube-controller/2.log" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.897600 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovnkube-controller/1.log" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.901920 5008 generic.go:334] "Generic (PLEG): container finished" podID="1d092513-7735-4c98-9734-57bc46b99280" containerID="643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120" exitCode=1 Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.901962 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerDied","Data":"643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120"} Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.902004 5008 scope.go:117] "RemoveContainer" containerID="7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.904057 5008 scope.go:117] "RemoveContainer" containerID="643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120" Jan 29 15:28:31 crc kubenswrapper[5008]: E0129 15:28:31.904385 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pqg9w_openshift-ovn-kubernetes(1d092513-7735-4c98-9734-57bc46b99280)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podUID="1d092513-7735-4c98-9734-57bc46b99280" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.926897 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:31Z is after 2025-08-24T17:21:41Z" Jan 29 
15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.960370 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.965486 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.965585 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.965644 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.965670 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.965731 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:31Z","lastTransitionTime":"2026-01-29T15:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:31 crc kubenswrapper[5008]: I0129 15:28:31.983234 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.007888 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.028039 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.051697 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.068715 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.068821 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.068846 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.068872 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.068889 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:32Z","lastTransitionTime":"2026-01-29T15:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.073975 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.090654 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.112907 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f3710d4-b153-4018-a492-367eb8b81ef8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c89e24fc5acc0577d3d738d63e7982aa32a07ecc01952570f6f417286b8747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33245f510d76b9610b3e44259d0944eaef5873c4bc31c3f3012a013248d16933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76eae897742ba4e95f6d60a81e2da82f1c0b0e220f48473436b03bff9f2f7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.133648 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.156503 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.172007 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.172086 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.172109 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.172133 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.172151 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:32Z","lastTransitionTime":"2026-01-29T15:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.177184 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.194714 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.218495 5008 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e99cc5b72dd4558981820cab4c037fc0a5419fbf5c8f8b6cc3733fa97ccbab8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"message\\\":\\\"nil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347483 6482 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 15:28:14.347479 6482 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:14.347139 6482 services_controller.go:453] Built service openshift-ingress-canary/ingress-canary template LB for network=default: []services.LB{}\\\\nF0129 15:28:14.347332 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy 
c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:31Z\\\",\\\"message\\\":\\\"15:28:30.852721 6704 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0129 15:28:30.852591 6704 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 15:28:30.852819 6704 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 15:28:30.852875 6704 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.238275 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.252858 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 
15:28:32.271912 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfb
b085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\
\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.274969 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 
15:28:32.275046 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.275125 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.275162 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.275185 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:32Z","lastTransitionTime":"2026-01-29T15:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.286220 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.309601 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 08:23:04.397169389 +0000 UTC Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.323129 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.323125 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:32 crc kubenswrapper[5008]: E0129 15:28:32.323318 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.323152 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:32 crc kubenswrapper[5008]: E0129 15:28:32.323389 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:32 crc kubenswrapper[5008]: E0129 15:28:32.323515 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.377718 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.377802 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.377820 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.377839 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.377853 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:32Z","lastTransitionTime":"2026-01-29T15:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.479916 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.479974 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.479991 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.480012 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.480032 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:32Z","lastTransitionTime":"2026-01-29T15:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.582649 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.582684 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.582703 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.582721 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.582731 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:32Z","lastTransitionTime":"2026-01-29T15:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.685334 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.685392 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.685401 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.685415 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.685425 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:32Z","lastTransitionTime":"2026-01-29T15:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.788258 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.788329 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.788351 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.788384 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.788406 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:32Z","lastTransitionTime":"2026-01-29T15:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.890986 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.891056 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.891080 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.891108 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.891131 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:32Z","lastTransitionTime":"2026-01-29T15:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.914769 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovnkube-controller/2.log" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.920113 5008 scope.go:117] "RemoveContainer" containerID="643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120" Jan 29 15:28:32 crc kubenswrapper[5008]: E0129 15:28:32.920543 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pqg9w_openshift-ovn-kubernetes(1d092513-7735-4c98-9734-57bc46b99280)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podUID="1d092513-7735-4c98-9734-57bc46b99280" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.939038 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 
15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.974209 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.993816 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.993894 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.993918 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.993947 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.993968 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:32Z","lastTransitionTime":"2026-01-29T15:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:32 crc kubenswrapper[5008]: I0129 15:28:32.996751 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:32Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.018848 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:33Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.038283 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:33Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.059322 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:33Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.080023 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:33Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.096268 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:33Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.098891 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.098964 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.098984 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.099023 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.099062 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:33Z","lastTransitionTime":"2026-01-29T15:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.116966 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f3710d4-b153-4018-a492-367eb8b81ef8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c89e24fc5acc0577d3d738d63e7982aa32a07ecc01952570f6f417286b8747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33245f510d76b9610b3e44259d0944eaef5873c4bc31c3f3012a013248d16933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76eae897742ba4e95f6d60a81e2da82f1c0b0e220f48473436b03bff9f2f7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:33Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.136031 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:33Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.154350 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:33Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.172470 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:33Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.202166 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.202230 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.202243 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.202261 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.202276 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:33Z","lastTransitionTime":"2026-01-29T15:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.203264 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:33Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.235289 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1b
f41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:31Z\\\",\\\"message\\\":\\\"15:28:30.852721 6704 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0129 15:28:30.852591 6704 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 15:28:30.852819 6704 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 15:28:30.852875 6704 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pqg9w_openshift-ovn-kubernetes(1d092513-7735-4c98-9734-57bc46b99280)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:33Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.257083 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:33Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.275965 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:33Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.294237 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:33Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.305123 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.305168 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:33 crc 
kubenswrapper[5008]: I0129 15:28:33.305182 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.305200 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.305215 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:33Z","lastTransitionTime":"2026-01-29T15:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.306710 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:33Z is after 2025-08-24T17:21:41Z" Jan 
29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.310725 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 22:42:14.424102166 +0000 UTC
Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.323394 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c"
Jan 29 15:28:33 crc kubenswrapper[5008]: E0129 15:28:33.323567 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9"
Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.410698 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.411095 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.411118 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.411142 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:33 crc kubenswrapper[5008]: I0129 15:28:33.411157 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:33Z","lastTransitionTime":"2026-01-29T15:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... eight identical "Recording event message for node" (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady) / "Node became not ready" groups omitted, differing only in timestamps: 15:28:33.515532, 15:28:33.618370, 15:28:33.721199, 15:28:33.823596, 15:28:33.926258, 15:28:34.029878, 15:28:34.135029, 15:28:34.238843 ...]
Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.310885 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 16:32:41.172364017 +0000 UTC
Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.323517 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:28:34 crc kubenswrapper[5008]: E0129 15:28:34.323726 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.323969 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:28:34 crc kubenswrapper[5008]: E0129 15:28:34.324063 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:28:34 crc kubenswrapper[5008]: I0129 15:28:34.324125 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:28:34 crc kubenswrapper[5008]: E0129 15:28:34.324267 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[... seven further identical "Recording event message for node" / "Node became not ready" groups omitted, differing only in timestamps: 15:28:34.341612, 15:28:34.447556, 15:28:34.551855, 15:28:34.654545, 15:28:34.758137, 15:28:34.861371, 15:28:34.964425 ...]
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.022979 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.023053 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.023076 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.023107 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.023130 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:35Z","lastTransitionTime":"2026-01-29T15:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:35 crc kubenswrapper[5008]: E0129 15:28:35.045056 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:35Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.050912 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.050963 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.050982 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.051005 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.051022 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:35Z","lastTransitionTime":"2026-01-29T15:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:35 crc kubenswrapper[5008]: E0129 15:28:35.072007 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{ ... node status patch omitted; byte-for-byte identical to the 15:28:35.045056 entry above ... }\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:35Z is after 2025-08-24T17:21:41Z"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.076883 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.076963 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.076987 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.077018 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.077042 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:35Z","lastTransitionTime":"2026-01-29T15:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:35 crc kubenswrapper[5008]: E0129 15:28:35.098086 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:35Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.104039 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.104114 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.104140 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.104171 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.104194 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:35Z","lastTransitionTime":"2026-01-29T15:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:35 crc kubenswrapper[5008]: E0129 15:28:35.120050 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:35Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.124606 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.124642 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.124654 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.124672 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.124686 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:35Z","lastTransitionTime":"2026-01-29T15:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:35 crc kubenswrapper[5008]: E0129 15:28:35.141359 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:35Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:35 crc kubenswrapper[5008]: E0129 15:28:35.141525 5008 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.143664 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.143725 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.143742 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.143768 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.143822 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:35Z","lastTransitionTime":"2026-01-29T15:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.246985 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.247067 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.247089 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.247118 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.247139 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:35Z","lastTransitionTime":"2026-01-29T15:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.311630 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 01:21:28.490023277 +0000 UTC Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.323154 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:35 crc kubenswrapper[5008]: E0129 15:28:35.323359 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.350213 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.350246 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.350255 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.350269 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.350279 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:35Z","lastTransitionTime":"2026-01-29T15:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.453569 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.453638 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.453652 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.453678 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.453697 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:35Z","lastTransitionTime":"2026-01-29T15:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.557023 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.557089 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.557106 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.557129 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.557146 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:35Z","lastTransitionTime":"2026-01-29T15:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.660524 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.660577 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.660595 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.660616 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.660630 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:35Z","lastTransitionTime":"2026-01-29T15:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.763573 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.763623 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.763634 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.763651 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.763665 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:35Z","lastTransitionTime":"2026-01-29T15:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.866180 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.866263 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.866281 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.866311 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.866329 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:35Z","lastTransitionTime":"2026-01-29T15:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.969217 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.969299 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.969318 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.969348 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:35 crc kubenswrapper[5008]: I0129 15:28:35.969369 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:35Z","lastTransitionTime":"2026-01-29T15:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.072536 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.072578 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.072591 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.072608 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.072620 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:36Z","lastTransitionTime":"2026-01-29T15:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.175853 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.175900 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.175918 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.175944 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.175960 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:36Z","lastTransitionTime":"2026-01-29T15:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.278607 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.278681 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.278698 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.278721 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.278736 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:36Z","lastTransitionTime":"2026-01-29T15:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.311944 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 09:48:25.0020584 +0000 UTC Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.323607 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.323686 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.323623 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:36 crc kubenswrapper[5008]: E0129 15:28:36.323825 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:36 crc kubenswrapper[5008]: E0129 15:28:36.323934 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:36 crc kubenswrapper[5008]: E0129 15:28:36.324034 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.381989 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.382031 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.382043 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.382059 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.382083 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:36Z","lastTransitionTime":"2026-01-29T15:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.485999 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.486038 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.486047 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.486062 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.486071 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:36Z","lastTransitionTime":"2026-01-29T15:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.588951 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.588989 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.588998 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.589013 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.589022 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:36Z","lastTransitionTime":"2026-01-29T15:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.691756 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.691837 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.691855 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.691883 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.691919 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:36Z","lastTransitionTime":"2026-01-29T15:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.794712 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.794756 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.794768 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.794806 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.794819 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:36Z","lastTransitionTime":"2026-01-29T15:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.897911 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.897977 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.897995 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.898019 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:36 crc kubenswrapper[5008]: I0129 15:28:36.898036 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:36Z","lastTransitionTime":"2026-01-29T15:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.000894 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.001009 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.001037 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.001112 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.001142 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:37Z","lastTransitionTime":"2026-01-29T15:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.103505 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.103570 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.103584 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.103627 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.103642 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:37Z","lastTransitionTime":"2026-01-29T15:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.207315 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.207379 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.207396 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.207420 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.207437 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:37Z","lastTransitionTime":"2026-01-29T15:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.310453 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.310521 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.310538 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.310565 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.310583 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:37Z","lastTransitionTime":"2026-01-29T15:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.312978 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 10:56:41.055200248 +0000 UTC Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.323422 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:37 crc kubenswrapper[5008]: E0129 15:28:37.323894 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.360345 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643ac2f5dd2119b6ede74fb609222a3e5d7643c3
02ea60d1799cf3b8db6e2120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:31Z\\\",\\\"message\\\":\\\"15:28:30.852721 6704 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0129 15:28:30.852591 6704 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 15:28:30.852819 6704 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 15:28:30.852875 6704 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pqg9w_openshift-ovn-kubernetes(1d092513-7735-4c98-9734-57bc46b99280)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.374001 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.387979 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f3710d4-b153-4018-a492-367eb8b81ef8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c89e24fc5acc0577d3d738d63e7982aa32a07ecc01952570f6f417286b8747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33245f510d76b9610b3e44259d0944eaef5873c4bc31c3f3012a013248d16933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76eae897742ba4e95f6d60a81e2da82f1c0b0e220f48473436b03bff9f2f7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.406119 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 29 
15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.415109 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.415211 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.415231 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.415299 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.415322 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:37Z","lastTransitionTime":"2026-01-29T15:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.421356 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.435654 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.451661 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.469661 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.487950 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.504198 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.517941 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.517977 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:37 crc 
kubenswrapper[5008]: I0129 15:28:37.517988 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.518004 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.518015 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:37Z","lastTransitionTime":"2026-01-29T15:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.518677 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 
29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.532666 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.546946 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.575902 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"contain
erID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f396183
7d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.594800 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.616037 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.620208 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.620279 
5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.620293 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.620311 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.620325 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:37Z","lastTransitionTime":"2026-01-29T15:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.630382 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.643918 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.723963 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.724017 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.724036 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.724059 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.724077 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:37Z","lastTransitionTime":"2026-01-29T15:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.827597 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.827904 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.827926 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.827951 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.827970 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:37Z","lastTransitionTime":"2026-01-29T15:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.931547 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.931628 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.931647 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.931673 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:37 crc kubenswrapper[5008]: I0129 15:28:37.931689 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:37Z","lastTransitionTime":"2026-01-29T15:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.035193 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.035270 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.035295 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.035327 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.035353 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:38Z","lastTransitionTime":"2026-01-29T15:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.138386 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.138458 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.138471 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.138488 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.138500 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:38Z","lastTransitionTime":"2026-01-29T15:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.241312 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.241416 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.241438 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.241459 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.241473 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:38Z","lastTransitionTime":"2026-01-29T15:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.313892 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 13:24:27.291243988 +0000 UTC Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.323340 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.323392 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.323513 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:38 crc kubenswrapper[5008]: E0129 15:28:38.323510 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:38 crc kubenswrapper[5008]: E0129 15:28:38.323648 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:38 crc kubenswrapper[5008]: E0129 15:28:38.323754 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.344466 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.344531 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.344555 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.344587 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.344611 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:38Z","lastTransitionTime":"2026-01-29T15:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.447991 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.448124 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.448137 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.448155 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.448168 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:38Z","lastTransitionTime":"2026-01-29T15:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.551306 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.551409 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.551434 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.551466 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.551488 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:38Z","lastTransitionTime":"2026-01-29T15:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.655256 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.655980 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.656022 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.656041 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.656052 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:38Z","lastTransitionTime":"2026-01-29T15:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.760389 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.760440 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.760450 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.760467 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.760479 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:38Z","lastTransitionTime":"2026-01-29T15:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.862471 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.862711 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.862734 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.862752 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.862764 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:38Z","lastTransitionTime":"2026-01-29T15:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.965734 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.965843 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.965861 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.965880 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:38 crc kubenswrapper[5008]: I0129 15:28:38.965895 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:38Z","lastTransitionTime":"2026-01-29T15:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.068564 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.068622 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.068637 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.068658 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.068672 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:39Z","lastTransitionTime":"2026-01-29T15:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.172102 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.172156 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.172177 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.172201 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.172219 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:39Z","lastTransitionTime":"2026-01-29T15:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.274539 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.274612 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.274621 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.274633 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.274642 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:39Z","lastTransitionTime":"2026-01-29T15:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.314578 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 04:34:25.98226525 +0000 UTC Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.323304 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:39 crc kubenswrapper[5008]: E0129 15:28:39.323461 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.377011 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.377081 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.377100 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.377126 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.377144 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:39Z","lastTransitionTime":"2026-01-29T15:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.480216 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.480282 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.480300 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.480326 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.480344 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:39Z","lastTransitionTime":"2026-01-29T15:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.582528 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.582606 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.582639 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.582674 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.582700 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:39Z","lastTransitionTime":"2026-01-29T15:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.685427 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.685485 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.685495 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.685515 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.685530 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:39Z","lastTransitionTime":"2026-01-29T15:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.788653 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.788739 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.788751 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.788769 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.788802 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:39Z","lastTransitionTime":"2026-01-29T15:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.891696 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.891766 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.891820 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.891849 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.891871 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:39Z","lastTransitionTime":"2026-01-29T15:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.993981 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.994056 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.994075 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.994103 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:39 crc kubenswrapper[5008]: I0129 15:28:39.994121 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:39Z","lastTransitionTime":"2026-01-29T15:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.097583 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.097657 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.097675 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.097699 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.097718 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:40Z","lastTransitionTime":"2026-01-29T15:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.201340 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.201424 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.201444 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.201473 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.201492 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:40Z","lastTransitionTime":"2026-01-29T15:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.304437 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.304499 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.304516 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.304539 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.304556 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:40Z","lastTransitionTime":"2026-01-29T15:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.315161 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 16:21:33.219017574 +0000 UTC Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.322696 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.322825 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.322852 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:40 crc kubenswrapper[5008]: E0129 15:28:40.322958 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:40 crc kubenswrapper[5008]: E0129 15:28:40.323276 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:40 crc kubenswrapper[5008]: E0129 15:28:40.323276 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.408074 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.408116 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.408125 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.408140 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.408152 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:40Z","lastTransitionTime":"2026-01-29T15:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.511239 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.511320 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.511334 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.511350 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.511362 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:40Z","lastTransitionTime":"2026-01-29T15:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.614426 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.614488 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.614506 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.614540 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.614556 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:40Z","lastTransitionTime":"2026-01-29T15:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.717020 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.717072 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.717084 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.717107 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.717118 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:40Z","lastTransitionTime":"2026-01-29T15:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.820530 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.820574 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.820586 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.820604 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.820616 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:40Z","lastTransitionTime":"2026-01-29T15:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.922900 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.922974 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.923017 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.923049 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:40 crc kubenswrapper[5008]: I0129 15:28:40.923073 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:40Z","lastTransitionTime":"2026-01-29T15:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.025588 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.025628 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.025636 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.025651 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.025661 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:41Z","lastTransitionTime":"2026-01-29T15:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.133339 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.134052 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.134100 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.134129 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.134149 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:41Z","lastTransitionTime":"2026-01-29T15:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.237175 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.237249 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.237267 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.237293 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.237313 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:41Z","lastTransitionTime":"2026-01-29T15:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.315557 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 11:45:36.612504547 +0000 UTC Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.323343 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:41 crc kubenswrapper[5008]: E0129 15:28:41.323907 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.340390 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.340511 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.340534 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.340607 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.340624 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:41Z","lastTransitionTime":"2026-01-29T15:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.443532 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.443607 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.443633 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.443665 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.443692 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:41Z","lastTransitionTime":"2026-01-29T15:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.546995 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.547055 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.547076 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.547099 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.547114 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:41Z","lastTransitionTime":"2026-01-29T15:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.650308 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.650357 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.650368 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.650387 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.650401 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:41Z","lastTransitionTime":"2026-01-29T15:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.753877 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.753932 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.753943 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.753964 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.753976 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:41Z","lastTransitionTime":"2026-01-29T15:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.856840 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.856884 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.856897 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.856916 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.856927 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:41Z","lastTransitionTime":"2026-01-29T15:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.960015 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.960081 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.960094 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.960115 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:41 crc kubenswrapper[5008]: I0129 15:28:41.960129 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:41Z","lastTransitionTime":"2026-01-29T15:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.063315 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.063386 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.063398 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.063419 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.063430 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:42Z","lastTransitionTime":"2026-01-29T15:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.166525 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.166594 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.166640 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.166744 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.166768 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:42Z","lastTransitionTime":"2026-01-29T15:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.270304 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.270377 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.270400 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.270438 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.270457 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:42Z","lastTransitionTime":"2026-01-29T15:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.316729 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 19:02:46.197367575 +0000 UTC Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.323129 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.323229 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:42 crc kubenswrapper[5008]: E0129 15:28:42.323294 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.323311 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:42 crc kubenswrapper[5008]: E0129 15:28:42.323440 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:42 crc kubenswrapper[5008]: E0129 15:28:42.323556 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.373623 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.373669 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.373707 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.373725 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.373738 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:42Z","lastTransitionTime":"2026-01-29T15:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.476713 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.476772 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.476810 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.476869 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.476884 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:42Z","lastTransitionTime":"2026-01-29T15:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.579981 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.580060 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.580076 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.580099 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.580118 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:42Z","lastTransitionTime":"2026-01-29T15:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.682409 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.682450 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.682459 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.682472 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.682483 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:42Z","lastTransitionTime":"2026-01-29T15:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.785279 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.785333 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.785345 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.785367 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.785380 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:42Z","lastTransitionTime":"2026-01-29T15:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.888589 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.888663 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.888680 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.888710 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.888725 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:42Z","lastTransitionTime":"2026-01-29T15:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.991057 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.991106 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.991121 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.991144 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:42 crc kubenswrapper[5008]: I0129 15:28:42.991159 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:42Z","lastTransitionTime":"2026-01-29T15:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.094324 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.094441 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.094468 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.094502 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.094526 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:43Z","lastTransitionTime":"2026-01-29T15:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.198086 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.198147 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.198162 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.198187 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.198202 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:43Z","lastTransitionTime":"2026-01-29T15:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.300737 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.300852 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.300875 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.300905 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.300928 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:43Z","lastTransitionTime":"2026-01-29T15:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.317384 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 21:38:02.658613977 +0000 UTC Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.322676 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:43 crc kubenswrapper[5008]: E0129 15:28:43.323076 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.324647 5008 scope.go:117] "RemoveContainer" containerID="643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120" Jan 29 15:28:43 crc kubenswrapper[5008]: E0129 15:28:43.325007 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pqg9w_openshift-ovn-kubernetes(1d092513-7735-4c98-9734-57bc46b99280)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podUID="1d092513-7735-4c98-9734-57bc46b99280" Jan 29 15:28:43 crc kubenswrapper[5008]: I0129 15:28:43.342017 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
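The errors above all trace back to one condition: the kubelet finds no CNI network configuration, so every pod that needs a network sandbox fails to sync and the node stays NotReady. A minimal sketch of the check the message implies (the directory comes from the log text; the accepted extensions are an assumption, not the kubelet's exact code):

```go
// Sketch: report whether the CNI config directory named in the kubelet
// error contains any network configuration file.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // directory named in the kubelet error
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", dir, err)
		return
	}
	found := false
	for _, e := range entries {
		// Extensions are an assumption for illustration.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Printf("found CNI config: %s\n", filepath.Join(dir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file found; network plugin not ready")
	}
}
```

If the directory is empty, the network operator has not yet written its config; here that is consistent with ovnkube-controller sitting in CrashLoopBackOff above.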
[The five-entry Node status block repeats every ~100 ms from 15:28:43.404 through 15:28:44.230, identical except for timestamps.]
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.318455 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 08:14:01.955995801 +0000 UTC Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.322758 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.322855 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:44 crc kubenswrapper[5008]: E0129 15:28:44.322891 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:28:44 crc kubenswrapper[5008]: E0129 15:28:44.322988 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.323052 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:44 crc kubenswrapper[5008]: E0129 15:28:44.323107 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[The five-entry Node status block repeats every ~100 ms from 15:28:44.332 through 15:28:44.949.]
Jan 29 15:28:44 crc kubenswrapper[5008]: I0129 15:28:44.975525 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs\") pod \"network-metrics-daemon-kkc6c\" (UID: \"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\") " pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:44 crc kubenswrapper[5008]: E0129 15:28:44.975638 5008 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:28:44 crc kubenswrapper[5008]: E0129 15:28:44.975699 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs podName:f3716fd8-7f9b-44e2-ac3c-e907d8793dc9 nodeName:}" failed. No retries permitted until 2026-01-29 15:29:16.975683055 +0000 UTC m=+100.648537292 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs") pod "network-metrics-daemon-kkc6c" (UID: "f3716fd8-7f9b-44e2-ac3c-e907d8793dc9") : object "openshift-multus"/"metrics-daemon-secret" not registered
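The nestedpendingoperations entry shows the kubelet's per-volume exponential backoff: this mount has already failed repeatedly, so the next retry is pushed out 32s. A sketch of the doubling schedule (the initial delay and cap below are illustrative assumptions, not the kubelet's exact constants):

```go
// Sketch: exponential backoff that doubles the wait after each failure,
// capped at a maximum, producing the 32s durationBeforeRetry seen above.
package main

import (
	"fmt"
	"time"
)

// nextDelay returns the wait before the next retry given the current wait.
func nextDelay(cur, initial, max time.Duration) time.Duration {
	if cur == 0 {
		return initial
	}
	if next := 2 * cur; next < max {
		return next
	}
	return max
}

func main() {
	var d time.Duration
	for i := 0; i < 8; i++ {
		d = nextDelay(d, 500*time.Millisecond, 2*time.Minute)
		fmt.Printf("attempt %d: wait %v\n", i+1, d) // ... 16s, 32s, 64s ...
	}
}
```

With a 500 ms initial delay, the seventh consecutive failure lands on the 32 s wait logged here.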
[The five-entry Node status block repeats at 15:28:45.052, 15:28:45.154, and 15:28:45.257.]
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.319338 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 15:20:34.713155412 +0000 UTC Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.322836 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:45 crc kubenswrapper[5008]: E0129 15:28:45.323101 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9"
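Three certificate_manager lines in under three seconds report the same expiration (2026-02-24) but three different rotation deadlines (2025-12-29, 2025-12-27, 2025-11-13): the deadline is recomputed with random jitter on each pass. A sketch of that policy, assuming the deadline is drawn uniformly from the 70-90% span of the certificate's validity (an assumption about client-go's certificate manager, and the issue date below is invented for illustration):

```go
// Sketch: pick a jittered rotation deadline inside a certificate's
// validity window, so repeated evaluations yield different deadlines.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	// Random point between 70% and 90% of the total lifetime (assumed policy).
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // expiry from the log
	notBefore := notAfter.Add(-365 * 24 * time.Hour)          // assumed issue date
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}
```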
[The five-entry Node status block repeats at 15:28:45.359, 15:28:45.461, and 15:28:45.466.]
Jan 29 15:28:45 crc kubenswrapper[5008]: E0129 15:28:45.485432 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:45Z is after 
2025-08-24T17:21:41Z" Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.490109 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.490164 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.490181 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.490205 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.490225 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:45Z","lastTransitionTime":"2026-01-29T15:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:45 crc kubenswrapper[5008]: E0129 15:28:45.507073 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:45Z is after 
2025-08-24T17:21:41Z" Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.511498 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.511536 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.511552 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.511569 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.511582 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:45Z","lastTransitionTime":"2026-01-29T15:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:45 crc kubenswrapper[5008]: E0129 15:28:45.527226 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:45Z is after 
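The patch attempts above all fail the same way: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate whose NotAfter date (2025-08-24T17:21:41Z) is long past the node's clock (2026-01-29), so the kubelet's TLS client rejects the handshake before the status patch can be delivered. A minimal Go sketch of the same validity check (the certificate path is a hypothetical stand-in, not taken from this log):

// certcheck.go -- a minimal sketch of the x509 validity window check that
// fails in the handshake above; the certificate path is a hypothetical
// stand-in for illustration only.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical location of the webhook's serving certificate.
	data, err := os.ReadFile("/tmp/webhook-serving.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now().UTC()
	switch {
	case now.After(cert.NotAfter):
		// Mirrors "certificate has expired ... current time X is after Y".
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	default:
		fmt.Printf("valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	}
}
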
2025-08-24T17:21:41Z" Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.537552 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.537604 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.537615 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.537631 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.537641 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:45Z","lastTransitionTime":"2026-01-29T15:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:45 crc kubenswrapper[5008]: E0129 15:28:45.554129 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:45Z is after 
2025-08-24T17:21:41Z" Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.557971 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.558026 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.558043 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.558071 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.558087 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:45Z","lastTransitionTime":"2026-01-29T15:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:45 crc kubenswrapper[5008]: E0129 15:28:45.570934 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:45Z is after 
2025-08-24T17:21:41Z" Jan 29 15:28:45 crc kubenswrapper[5008]: E0129 15:28:45.571087 5008 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.572338 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.572365 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.572375 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.572392 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.572404 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:45Z","lastTransitionTime":"2026-01-29T15:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.674743 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.674891 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.675056 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.675084 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.675100 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:45Z","lastTransitionTime":"2026-01-29T15:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.776947 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.777011 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.777029 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.777052 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.777071 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:45Z","lastTransitionTime":"2026-01-29T15:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.879360 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.879404 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.879416 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.879433 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.879446 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:45Z","lastTransitionTime":"2026-01-29T15:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.981159 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.981230 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.981253 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.981284 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:45 crc kubenswrapper[5008]: I0129 15:28:45.981305 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:45Z","lastTransitionTime":"2026-01-29T15:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.083233 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.083291 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.083308 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.083333 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.083349 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:46Z","lastTransitionTime":"2026-01-29T15:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.185905 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.185988 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.186006 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.186036 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.186054 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:46Z","lastTransitionTime":"2026-01-29T15:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.288492 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.288519 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.288528 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.288541 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.288552 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:46Z","lastTransitionTime":"2026-01-29T15:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.320484 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 00:29:14.437186072 +0000 UTC
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.323032 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.323087 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.323050 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:28:46 crc kubenswrapper[5008]: E0129 15:28:46.323249 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:28:46 crc kubenswrapper[5008]: E0129 15:28:46.323317 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:28:46 crc kubenswrapper[5008]: E0129 15:28:46.323434 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.392406 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.392450 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.392460 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.392478 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.392489 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:46Z","lastTransitionTime":"2026-01-29T15:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
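The repeating NotReady condition, and the three pods above that cannot get a sandbox, all trace back to the same check: the runtime reports NetworkReady=false because no CNI network definition exists under /etc/kubernetes/cni/net.d/. A minimal sketch of such a probe, assuming the directory and file extensions below (the real check lives in the container runtime, not in this exact form):

// cnicheck.go -- approximate the readiness probe behind "no CNI configuration
// file in /etc/kubernetes/cni/net.d/". Directory and extensions are
// assumptions for illustration.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const confDir = "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("NetworkReady=false: cannot read %s: %v\n", confDir, err)
		return
	}
	for _, e := range entries {
		ext := strings.ToLower(filepath.Ext(e.Name()))
		if !e.IsDir() && (ext == ".conf" || ext == ".conflist" || ext == ".json") {
			fmt.Printf("NetworkReady=true: found %s\n", filepath.Join(confDir, e.Name()))
			return
		}
	}
	fmt.Printf("NetworkReady=false: no CNI configuration file in %s. Has your network provider started?\n", confDir)
}
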
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.495047 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.495095 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.495109 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.495126 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.495137 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:46Z","lastTransitionTime":"2026-01-29T15:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.598134 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.598184 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.598198 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.598217 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.598231 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:46Z","lastTransitionTime":"2026-01-29T15:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.701389 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.701436 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.701447 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.701465 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.701478 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:46Z","lastTransitionTime":"2026-01-29T15:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.803301 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.803364 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.803384 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.803416 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.803431 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:46Z","lastTransitionTime":"2026-01-29T15:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.906659 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.906720 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.906738 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.906762 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:46 crc kubenswrapper[5008]: I0129 15:28:46.906778 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:46Z","lastTransitionTime":"2026-01-29T15:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.009069 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.009106 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.009117 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.009134 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.009148 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:47Z","lastTransitionTime":"2026-01-29T15:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.112317 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.112361 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.112373 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.112391 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.112402 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:47Z","lastTransitionTime":"2026-01-29T15:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.214598 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.214632 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.214643 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.214658 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.214668 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:47Z","lastTransitionTime":"2026-01-29T15:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.317019 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.317080 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.317107 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.317132 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.317144 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:47Z","lastTransitionTime":"2026-01-29T15:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.321259 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 01:12:12.405440458 +0000 UTC Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.323642 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:47 crc kubenswrapper[5008]: E0129 15:28:47.323800 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.336696 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.347720 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.358550 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aed8a0d-ecac-43fd-a31e-04cfbb01f872\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2cad6ba94fe1fbb01c043c1e8eabda3989f05822a3a7a6e105d2cd8aa794333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83662d418c40cdea3f8af62c97834fd30d88d2fe441ca4a0576566e8f6e9bc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83662d418c40cdea3f8af62c97834fd30d88d2fe441ca4a0576566e8f6e9bc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.372712 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.382659 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.396300 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.410819 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.419724 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.419860 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.419882 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.419909 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.419928 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:47Z","lastTransitionTime":"2026-01-29T15:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.428160 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.450135 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.462855 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.479055 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.494466 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.506878 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.522816 5008 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.522853 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.522863 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.522877 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.522886 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:47Z","lastTransitionTime":"2026-01-29T15:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.525905 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643ac2f5dd2119b6ede74fb609222a3e5d7643c3
02ea60d1799cf3b8db6e2120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:31Z\\\",\\\"message\\\":\\\"15:28:30.852721 6704 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0129 15:28:30.852591 6704 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 15:28:30.852819 6704 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 15:28:30.852875 6704 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pqg9w_openshift-ovn-kubernetes(1d092513-7735-4c98-9734-57bc46b99280)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.537712 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.551252 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f3710d4-b153-4018-a492-367eb8b81ef8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c89e24fc5acc0577d3d738d63e7982aa32a07ecc01952570f6f417286b8747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33245f510d76b9610b3e44259d0944eaef5873c4bc31c3f3012a013248d16933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76eae897742ba4e95f6d60a81e2da82f1c0b0e220f48473436b03bff9f2f7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.565728 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 
15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.577157 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.590528 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.624929 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.624964 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.624974 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.624995 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.625005 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:47Z","lastTransitionTime":"2026-01-29T15:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.726554 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.726604 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.726617 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.726635 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.726648 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:47Z","lastTransitionTime":"2026-01-29T15:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.828986 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.829039 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.829049 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.829070 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.829082 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:47Z","lastTransitionTime":"2026-01-29T15:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.932441 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.932501 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.932558 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.932588 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.932620 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:47Z","lastTransitionTime":"2026-01-29T15:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.973007 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-42hcz_cdd8ae23-3f9f-49f8-928d-46dad823fde4/kube-multus/0.log" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.973060 5008 generic.go:334] "Generic (PLEG): container finished" podID="cdd8ae23-3f9f-49f8-928d-46dad823fde4" containerID="a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b" exitCode=1 Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.973090 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-42hcz" event={"ID":"cdd8ae23-3f9f-49f8-928d-46dad823fde4","Type":"ContainerDied","Data":"a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b"} Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.973464 5008 scope.go:117] "RemoveContainer" containerID="a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b" Jan 29 15:28:47 crc kubenswrapper[5008]: I0129 15:28:47.988070 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aed8a0d-ecac-43fd-a31e-04cfbb01f872\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2cad6ba94fe1fbb01c043c1e8eabda3989f05822a3a7a6e105d2cd8aa794333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83662d418c40cdea3f8af62c97834fd30d88d2fe441ca4a0576566e8f6e9bc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83662d418c40cdea3f8af62c97834fd30d88d2fe441ca4a0576566e8f6e9bc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.010681 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.023905 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.036068 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.036403 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.036412 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.036426 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.036436 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:48Z","lastTransitionTime":"2026-01-29T15:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.041395 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.065803 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b
7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.079557 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.097620 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.112046 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.128353 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.139935 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.139993 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.140015 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.140043 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.140067 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:48Z","lastTransitionTime":"2026-01-29T15:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.143538 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:47Z\\\",\\\"message\\\":\\\"2026-01-29T15:28:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5ee9c321-48df-4d5b-add2-57b9ac5ae3f8\\\\n2026-01-29T15:28:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5ee9c321-48df-4d5b-add2-57b9ac5ae3f8 to /host/opt/cni/bin/\\\\n2026-01-29T15:28:02Z [verbose] multus-daemon started\\\\n2026-01-29T15:28:02Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:28:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.154925 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.165513 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f3710d4-b153-4018-a492-367eb8b81ef8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c89e24fc5acc0577d3d738d63e7982aa32a07ecc01952570f6f417286b8747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33245f510d76b9610b3e44259d0944eaef5873c4bc31c3f3012a013248d16933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76eae897742ba4e95f6d60a81e2da82f1c0b0e220f48473436b03bff9f2f7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.175320 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 
15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.187794 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.198335 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.213963 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.232039 5008 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:31Z\\\",\\\"message\\\":\\\"15:28:30.852721 6704 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0129 15:28:30.852591 6704 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 15:28:30.852819 6704 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 15:28:30.852875 6704 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pqg9w_openshift-ovn-kubernetes(1d092513-7735-4c98-9734-57bc46b99280)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.242430 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.242473 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.242485 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.242501 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.242513 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:48Z","lastTransitionTime":"2026-01-29T15:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.246896 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.258933 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.321680 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 13:03:08.453426423 +0000 UTC Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.323134 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:48 crc kubenswrapper[5008]: E0129 15:28:48.323473 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.323142 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:48 crc kubenswrapper[5008]: E0129 15:28:48.323591 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.323730 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:48 crc kubenswrapper[5008]: E0129 15:28:48.324077 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.345237 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.345273 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.345285 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.345301 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.345312 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:48Z","lastTransitionTime":"2026-01-29T15:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.447059 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.447389 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.447461 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.447523 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.447591 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:48Z","lastTransitionTime":"2026-01-29T15:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.549129 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.549167 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.549178 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.549193 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.549204 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:48Z","lastTransitionTime":"2026-01-29T15:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.651597 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.651634 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.651642 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.651656 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.651666 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:48Z","lastTransitionTime":"2026-01-29T15:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.754165 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.754202 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.754212 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.754226 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.754235 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:48Z","lastTransitionTime":"2026-01-29T15:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.856194 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.856226 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.856234 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.856248 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.856257 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:48Z","lastTransitionTime":"2026-01-29T15:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.958918 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.958982 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.959001 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.959027 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.959048 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:48Z","lastTransitionTime":"2026-01-29T15:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.978590 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-42hcz_cdd8ae23-3f9f-49f8-928d-46dad823fde4/kube-multus/0.log" Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.978648 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-42hcz" event={"ID":"cdd8ae23-3f9f-49f8-928d-46dad823fde4","Type":"ContainerStarted","Data":"af9a973786f58d2c63123c28e0b1aedaa9ec4188567960c544cf68f70ba20873"} Jan 29 15:28:48 crc kubenswrapper[5008]: I0129 15:28:48.993563 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aed8a0d-ecac-43fd-a31e-04cfbb01f872\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2cad6ba94fe1fbb01c043c1e8eabda3989f05822a3a7a6e105d2cd8aa794333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83662d418c40cdea3f8af62c97834fd30d88d2fe441ca4a0576566e8f6e9bc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83662d418c40cdea3f8af62c97834fd30d88d2fe441ca4a0576566e8f6e9bc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.015385 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.027438 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.039143 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 
15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.061538 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.061577 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.061585 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.061601 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.061611 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:49Z","lastTransitionTime":"2026-01-29T15:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.103930 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.120163 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.135155 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.147515 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.159399 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.164245 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.164291 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.164303 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.164321 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.164331 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:49Z","lastTransitionTime":"2026-01-29T15:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.171258 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af9a973786f58d2c63123c28e0b1aedaa9ec4188567960c544cf68f70ba20873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:47Z\\\",\\\"message\\\":\\\"2026-01-29T15:28:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5ee9c321-48df-4d5b-add2-57b9ac5ae3f8\\\\n2026-01-29T15:28:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5ee9c321-48df-4d5b-add2-57b9ac5ae3f8 to /host/opt/cni/bin/\\\\n2026-01-29T15:28:02Z [verbose] multus-daemon started\\\\n2026-01-29T15:28:02Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:28:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.180370 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.189964 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f3710d4-b153-4018-a492-367eb8b81ef8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c89e24fc5acc0577d3d738d63e7982aa32a07ecc01952570f6f417286b8747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33245f510d76b9610b3e44259d0944eaef5873c4bc31c3f3012a013248d16933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76eae897742ba4e95f6d60a81e2da82f1c0b0e220f48473436b03bff9f2f7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.202207 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 
15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.216542 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.227776 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.240802 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.263683 5008 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:31Z\\\",\\\"message\\\":\\\"15:28:30.852721 6704 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0129 15:28:30.852591 6704 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 15:28:30.852819 6704 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 15:28:30.852875 6704 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pqg9w_openshift-ovn-kubernetes(1d092513-7735-4c98-9734-57bc46b99280)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.266281 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.266310 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.266322 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.266345 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.266356 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:49Z","lastTransitionTime":"2026-01-29T15:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.281494 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.296066 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.322859 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 00:09:12.800761068 +0000 UTC Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.323097 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:49 crc kubenswrapper[5008]: E0129 15:28:49.323309 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.369019 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.369055 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.369066 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.369079 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.369089 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:49Z","lastTransitionTime":"2026-01-29T15:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.471673 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.471709 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.471721 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.471737 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.471747 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:49Z","lastTransitionTime":"2026-01-29T15:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.575211 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.575291 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.575309 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.575339 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.575358 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:49Z","lastTransitionTime":"2026-01-29T15:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.678247 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.678282 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.678291 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.678304 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.678313 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:49Z","lastTransitionTime":"2026-01-29T15:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.781278 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.781325 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.781338 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.781353 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.781363 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:49Z","lastTransitionTime":"2026-01-29T15:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.884031 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.884084 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.884094 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.884109 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.884120 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:49Z","lastTransitionTime":"2026-01-29T15:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.985611 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.985661 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.985672 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.985688 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:49 crc kubenswrapper[5008]: I0129 15:28:49.985699 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:49Z","lastTransitionTime":"2026-01-29T15:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.088221 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.088282 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.088295 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.088312 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.088323 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:50Z","lastTransitionTime":"2026-01-29T15:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.190200 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.190263 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.190312 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.190329 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.190339 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:50Z","lastTransitionTime":"2026-01-29T15:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.293503 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.293541 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.293552 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.293567 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.293578 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:50Z","lastTransitionTime":"2026-01-29T15:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.323401 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.323473 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:50 crc kubenswrapper[5008]: E0129 15:28:50.323559 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:50 crc kubenswrapper[5008]: E0129 15:28:50.323693 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.323836 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.323897 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 06:06:41.472838428 +0000 UTC Jan 29 15:28:50 crc kubenswrapper[5008]: E0129 15:28:50.324041 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.397450 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.397491 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.397503 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.397518 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.397529 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:50Z","lastTransitionTime":"2026-01-29T15:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.500069 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.500112 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.500123 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.500139 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.500150 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:50Z","lastTransitionTime":"2026-01-29T15:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.603003 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.603050 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.603080 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.603101 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.603115 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:50Z","lastTransitionTime":"2026-01-29T15:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.705641 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.705678 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.705689 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.705703 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.705714 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:50Z","lastTransitionTime":"2026-01-29T15:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.809149 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.809189 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.809198 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.809211 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.809224 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:50Z","lastTransitionTime":"2026-01-29T15:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.912553 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.912649 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.912663 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.912699 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:50 crc kubenswrapper[5008]: I0129 15:28:50.912716 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:50Z","lastTransitionTime":"2026-01-29T15:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.015440 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.015476 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.015487 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.015500 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.015509 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:51Z","lastTransitionTime":"2026-01-29T15:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.118134 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.118205 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.118227 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.118258 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.118281 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:51Z","lastTransitionTime":"2026-01-29T15:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.221484 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.221547 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.221567 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.221592 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.221610 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:51Z","lastTransitionTime":"2026-01-29T15:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.322972 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:51 crc kubenswrapper[5008]: E0129 15:28:51.323249 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.324149 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 05:06:21.485790109 +0000 UTC Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.324768 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.324879 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.324929 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.324947 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.324962 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:51Z","lastTransitionTime":"2026-01-29T15:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.427374 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.427403 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.427411 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.427423 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.427431 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:51Z","lastTransitionTime":"2026-01-29T15:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.529610 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.529663 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.529690 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.529709 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.529723 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:51Z","lastTransitionTime":"2026-01-29T15:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.632626 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.632688 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.632705 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.632727 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.632742 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:51Z","lastTransitionTime":"2026-01-29T15:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.750358 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.750406 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.750415 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.750432 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.750444 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:51Z","lastTransitionTime":"2026-01-29T15:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.854521 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.854558 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.854566 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.854582 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.854592 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:51Z","lastTransitionTime":"2026-01-29T15:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.956741 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.956823 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.956834 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.956851 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:51 crc kubenswrapper[5008]: I0129 15:28:51.956862 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:51Z","lastTransitionTime":"2026-01-29T15:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.061024 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.061075 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.061086 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.061103 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.061116 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:52Z","lastTransitionTime":"2026-01-29T15:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.163306 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.163362 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.163374 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.163395 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.163407 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:52Z","lastTransitionTime":"2026-01-29T15:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.266157 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.266253 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.266276 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.266303 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.266320 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:52Z","lastTransitionTime":"2026-01-29T15:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.323387 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.323489 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.323511 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:28:52 crc kubenswrapper[5008]: E0129 15:28:52.323678 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:28:52 crc kubenswrapper[5008]: E0129 15:28:52.323845 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:28:52 crc kubenswrapper[5008]: E0129 15:28:52.323901 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.324283 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 10:29:21.671592971 +0000 UTC
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.368859 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.368914 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.368926 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.368941 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.368952 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:52Z","lastTransitionTime":"2026-01-29T15:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.471294 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.471349 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.471360 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.471385 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.471401 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:52Z","lastTransitionTime":"2026-01-29T15:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.574845 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.574920 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.574940 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.574966 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.574983 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:52Z","lastTransitionTime":"2026-01-29T15:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.677864 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.677932 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.677952 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.677977 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.677996 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:52Z","lastTransitionTime":"2026-01-29T15:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.781741 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.781816 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.781834 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.781861 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.781876 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:52Z","lastTransitionTime":"2026-01-29T15:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.884318 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.884375 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.884387 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.884404 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.884416 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:52Z","lastTransitionTime":"2026-01-29T15:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.987582 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.987621 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.987632 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.987649 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:52 crc kubenswrapper[5008]: I0129 15:28:52.987659 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:52Z","lastTransitionTime":"2026-01-29T15:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.089955 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.090018 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.090039 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.090062 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.090078 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:53Z","lastTransitionTime":"2026-01-29T15:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.193990 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.194042 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.194053 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.194070 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.194083 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:53Z","lastTransitionTime":"2026-01-29T15:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.297286 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.297345 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.297358 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.297381 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.297400 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:53Z","lastTransitionTime":"2026-01-29T15:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.323942 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c"
Jan 29 15:28:53 crc kubenswrapper[5008]: E0129 15:28:53.324134 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.324445 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 11:06:36.783606506 +0000 UTC
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.401453 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.401520 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.401537 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.401564 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.401587 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:53Z","lastTransitionTime":"2026-01-29T15:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.503852 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.503932 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.503951 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.503979 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.504003 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:53Z","lastTransitionTime":"2026-01-29T15:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.606718 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.606760 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.606770 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.606829 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.606840 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:53Z","lastTransitionTime":"2026-01-29T15:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.709931 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.710005 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.710031 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.710061 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.710083 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:53Z","lastTransitionTime":"2026-01-29T15:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.813386 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.813463 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.813476 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.813494 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.813506 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:53Z","lastTransitionTime":"2026-01-29T15:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.916445 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.916497 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.916515 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.916535 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:53 crc kubenswrapper[5008]: I0129 15:28:53.916547 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:53Z","lastTransitionTime":"2026-01-29T15:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.019959 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.020034 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.020053 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.020078 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.020093 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:54Z","lastTransitionTime":"2026-01-29T15:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.123052 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.123121 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.123145 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.123173 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.123199 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:54Z","lastTransitionTime":"2026-01-29T15:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.226414 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.226481 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.226497 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.226521 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.226534 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:54Z","lastTransitionTime":"2026-01-29T15:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.323045 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.323081 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:28:54 crc kubenswrapper[5008]: E0129 15:28:54.323160 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.323051 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:28:54 crc kubenswrapper[5008]: E0129 15:28:54.323258 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:28:54 crc kubenswrapper[5008]: E0129 15:28:54.323404 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.325108 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 13:12:21.08656571 +0000 UTC
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.329297 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.329333 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.329344 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.329361 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.329374 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:54Z","lastTransitionTime":"2026-01-29T15:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.432592 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.432635 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.432643 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.432657 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.432671 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:54Z","lastTransitionTime":"2026-01-29T15:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.535651 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.535694 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.535707 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.535724 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.535737 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:54Z","lastTransitionTime":"2026-01-29T15:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.637750 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.637831 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.637843 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.637862 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.637874 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:54Z","lastTransitionTime":"2026-01-29T15:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.740335 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.740463 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.740523 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.740548 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.740569 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:54Z","lastTransitionTime":"2026-01-29T15:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.843438 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.843721 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.843897 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.844008 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.844110 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:54Z","lastTransitionTime":"2026-01-29T15:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.946927 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.946991 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.947010 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.947034 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:54 crc kubenswrapper[5008]: I0129 15:28:54.947051 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:54Z","lastTransitionTime":"2026-01-29T15:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.050361 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.050423 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.050442 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.050467 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.050486 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.153515 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.153916 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.154227 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.154417 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.154738 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.257589 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.257635 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.257647 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.257662 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.257674 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.323688 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c"
Jan 29 15:28:55 crc kubenswrapper[5008]: E0129 15:28:55.323886 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.325899 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 19:24:14.832125552 +0000 UTC
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.360905 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.361034 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.361058 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.361088 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.361109 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.464058 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.464143 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.464168 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.464197 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.464220 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.566847 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.566905 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.566914 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.566928 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.566937 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.669730 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.669774 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.669803 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.669818 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.669829 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.772233 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.772266 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.772275 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.772289 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.772298 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.875242 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.875301 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.875320 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.875343 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.875360 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.948662 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.948718 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.948734 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.948759 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.948839 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:28:55 crc kubenswrapper[5008]: E0129 15:28:55.961888 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:55Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.965064 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.965090 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.965100 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.965113 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.965122 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:55 crc kubenswrapper[5008]: E0129 15:28:55.978670 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:55Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.982426 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.982481 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.982491 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.982507 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.982518 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:55 crc kubenswrapper[5008]: E0129 15:28:55.993204 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:55Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.996807 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.996874 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.996891 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.996915 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:55 crc kubenswrapper[5008]: I0129 15:28:55.996932 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:56 crc kubenswrapper[5008]: E0129 15:28:56.010175 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.014200 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.014282 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.014308 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.014340 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.014365 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:56Z","lastTransitionTime":"2026-01-29T15:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:56 crc kubenswrapper[5008]: E0129 15:28:56.030373 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"23463cb0-4db2-46f4-86c5-cabe2301deff\\\",\\\"systemUUID\\\":\\\"ad986a03-9926-4209-a3e1-d38e666bee86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:56 crc kubenswrapper[5008]: E0129 15:28:56.030499 5008 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.031714 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.031764 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.031775 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.031805 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.031815 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:56Z","lastTransitionTime":"2026-01-29T15:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.135191 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.135240 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.135252 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.135269 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.135281 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:56Z","lastTransitionTime":"2026-01-29T15:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.237955 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.238005 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.238017 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.238035 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.238048 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:56Z","lastTransitionTime":"2026-01-29T15:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.322914 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.322989 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:56 crc kubenswrapper[5008]: E0129 15:28:56.323065 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:56 crc kubenswrapper[5008]: E0129 15:28:56.323187 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.323347 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:56 crc kubenswrapper[5008]: E0129 15:28:56.323717 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.324178 5008 scope.go:117] "RemoveContainer" containerID="643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.326145 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 02:20:07.587342446 +0000 UTC Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.343370 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.343407 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.343418 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.343433 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.343444 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:56Z","lastTransitionTime":"2026-01-29T15:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.454190 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.454251 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.454262 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.454330 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.454358 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:56Z","lastTransitionTime":"2026-01-29T15:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.556837 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.556885 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.556899 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.556916 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.556929 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:56Z","lastTransitionTime":"2026-01-29T15:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.659565 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.659617 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.659629 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.659648 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.659660 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:56Z","lastTransitionTime":"2026-01-29T15:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.762062 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.762099 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.762109 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.762122 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.762133 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:56Z","lastTransitionTime":"2026-01-29T15:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.864145 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.864180 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.864191 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.864208 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.864219 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:56Z","lastTransitionTime":"2026-01-29T15:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.966495 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.966534 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.966544 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.966558 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:56 crc kubenswrapper[5008]: I0129 15:28:56.966567 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:56Z","lastTransitionTime":"2026-01-29T15:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.007406 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovnkube-controller/2.log" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.009991 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerStarted","Data":"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9"} Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.010373 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.032445 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4894794fa383987c6dc74bda3cd40e56fa81dab
982e631fe2fb043b74a6afd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:31Z\\\",\\\"message\\\":\\\"15:28:30.852721 6704 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0129 15:28:30.852591 6704 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 15:28:30.852819 6704 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 15:28:30.852875 6704 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.044288 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.055433 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f3710d4-b153-4018-a492-367eb8b81ef8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c89e24fc5acc0577d3d738d63e7982aa32a07ecc01952570f6f417286b8747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33245f510d76b9610b3e44259d0944eaef5873c4bc31c3f3012a013248d16933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76eae897742ba4e95f6d60a81e2da82f1c0b0e220f48473436b03bff9f2f7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.068202 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.068238 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.068249 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.068206 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.068265 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.068424 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:57Z","lastTransitionTime":"2026-01-29T15:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.081249 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.090805 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.100634 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.115515 5008 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.130334 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.142862 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aed8a0d-ecac-43fd-a31e-04cfbb01f872\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2cad6ba94fe1fbb01c043c1e8eabda3989f05822a3a7a6e105d2cd8aa794333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83662d418c40cdea3f8af62c97834fd30d88d2fe441ca4a0576566e8f6e9bc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83662d418c40cdea3f8af62c97834fd30d88d2fe441ca4a0576566e8f6e9bc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.156534 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.167550 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.170257 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.170288 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.170300 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.170316 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.170327 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:57Z","lastTransitionTime":"2026-01-29T15:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.183376 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af9a973786f58d2c63123c28e0b1aedaa9ec4188567960c544cf68f70ba20873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:47Z\\\",\\\"message\\\":\\\"2026-01-29T15:28:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5ee9c321-48df-4d5b-add2-57b9ac5ae3f8\\\\n2026-01-29T15:28:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5ee9c321-48df-4d5b-add2-57b9ac5ae3f8 to /host/opt/cni/bin/\\\\n2026-01-29T15:28:02Z [verbose] multus-daemon started\\\\n2026-01-29T15:28:02Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:28:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.193217 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 
15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.211314 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.225295 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.238105 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.248716 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.258804 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.272646 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.272684 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.272692 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.272706 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.272714 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:57Z","lastTransitionTime":"2026-01-29T15:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.322711 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:57 crc kubenswrapper[5008]: E0129 15:28:57.322943 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.327105 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 05:31:41.968963403 +0000 UTC Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.343233 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b5
55efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.355667 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.364613 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aed8a0d-ecac-43fd-a31e-04cfbb01f872\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2cad6ba94fe1fbb01c043c1e8eabda3989f05822a3a7a6e105d2cd8aa794333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83662d418c40cdea3f8af62c97834fd30d88d2fe441ca4a0576566e8f6e9bc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83662d418c40cdea3f8af62c97834fd30d88d2fe441ca4a0576566e8f6e9bc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.374932 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.375183 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.375318 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.375435 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.375543 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:57Z","lastTransitionTime":"2026-01-29T15:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.378830 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-c
erts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.393182 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.406585 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.420938 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.439336 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af9a973786f58d2c63123c28e0b1aedaa9ec4188567960c544cf68f70ba20873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:47Z\\\",\\\"message\\\":\\\"2026-01-29T15:28:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5ee9c321-48df-4d5b-add2-57b9ac5ae3f8\\\\n2026-01-29T15:28:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5ee9c321-48df-4d5b-add2-57b9ac5ae3f8 to /host/opt/cni/bin/\\\\n2026-01-29T15:28:02Z [verbose] multus-daemon started\\\\n2026-01-29T15:28:02Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:28:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.453253 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 
15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.473145 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.477703 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.477736 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.477745 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.477759 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.477769 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:57Z","lastTransitionTime":"2026-01-29T15:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.484747 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.496286 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.504240 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.516351 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.532843 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4894794fa383987c6dc74bda3cd40e56fa81dab
982e631fe2fb043b74a6afd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:31Z\\\",\\\"message\\\":\\\"15:28:30.852721 6704 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0129 15:28:30.852591 6704 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 15:28:30.852819 6704 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 15:28:30.852875 6704 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.545460 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.556990 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f3710d4-b153-4018-a492-367eb8b81ef8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c89e24fc5acc0577d3d738d63e7982aa32a07ecc01952570f6f417286b8747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33245f510d76b9610b3e44259d0944eaef5873c4bc31c3f3012a013248d16933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76eae897742ba4e95f6d60a81e2da82f1c0b0e220f48473436b03bff9f2f7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.570848 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.580374 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.580604 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.580709 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.580776 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.580913 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:57Z","lastTransitionTime":"2026-01-29T15:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.586474 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.682513 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.682555 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.682566 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.682582 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.682595 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:57Z","lastTransitionTime":"2026-01-29T15:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.784617 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.784671 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.784688 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.784711 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.784731 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:57Z","lastTransitionTime":"2026-01-29T15:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.887538 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.887888 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.888004 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.888108 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.888245 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:57Z","lastTransitionTime":"2026-01-29T15:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.991347 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.991386 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.991398 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.991413 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:57 crc kubenswrapper[5008]: I0129 15:28:57.991423 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:57Z","lastTransitionTime":"2026-01-29T15:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.014688 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovnkube-controller/3.log" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.015990 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovnkube-controller/2.log" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.020686 5008 generic.go:334] "Generic (PLEG): container finished" podID="1d092513-7735-4c98-9734-57bc46b99280" containerID="c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9" exitCode=1 Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.020727 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerDied","Data":"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9"} Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.020763 5008 scope.go:117] "RemoveContainer" containerID="643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.022113 5008 scope.go:117] "RemoveContainer" containerID="c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9" Jan 29 15:28:58 crc kubenswrapper[5008]: E0129 15:28:58.022500 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pqg9w_openshift-ovn-kubernetes(1d092513-7735-4c98-9734-57bc46b99280)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podUID="1d092513-7735-4c98-9734-57bc46b99280" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.035000 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aed8a0d-ecac-43fd-a31e-04cfbb01f872\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2cad6ba94fe1fbb01c043c1e8eabda3989f05822a3a7a6e105d2cd8aa794333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83662d418c40cdea3f8af62c97834fd30d88d2fe441ca4a0576566e8f6e9bc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83662d418c40cdea3f8af62c97834fd30d88d2fe441ca4a0576566e8f6e9bc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.052289 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.063499 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.073682 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 
15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.093424 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.094110 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.094140 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.094148 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.094162 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.094172 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:58Z","lastTransitionTime":"2026-01-29T15:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.104966 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.116812 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.127720 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.139354 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.150490 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af9a973786f58d2c63123c28e0b1aedaa9ec4188567960c544cf68f70ba20873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:47Z\\\",\\\"message\\\":\\\"2026-01-29T15:28:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5ee9c321-48df-4d5b-add2-57b9ac5ae3f8\\\\n2026-01-29T15:28:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5ee9c321-48df-4d5b-add2-57b9ac5ae3f8 to /host/opt/cni/bin/\\\\n2026-01-29T15:28:02Z [verbose] multus-daemon started\\\\n2026-01-29T15:28:02Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:28:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.159585 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.170475 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f3710d4-b153-4018-a492-367eb8b81ef8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c89e24fc5acc0577d3d738d63e7982aa32a07ecc01952570f6f417286b8747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33245f510d76b9610b3e44259d0944eaef5873c4bc31c3f3012a013248d16933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76eae897742ba4e95f6d60a81e2da82f1c0b0e220f48473436b03bff9f2f7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.183234 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 
15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.196535 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.196651 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.196685 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.196695 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.196714 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.196726 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:58Z","lastTransitionTime":"2026-01-29T15:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.207565 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.219382 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.236392 5008 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://643ac2f5dd2119b6ede74fb609222a3e5d7643c302ea60d1799cf3b8db6e2120\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:31Z\\\",\\\"message\\\":\\\"15:28:30.852721 6704 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0129 15:28:30.852591 6704 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 15:28:30.852819 6704 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 15:28:30.852875 6704 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:57Z\\\",\\\"message\\\":\\\"rk=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.244\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0129 15:28:57.018530 7109 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify 
cert\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.248563 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.261658 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 
15:28:58.299518 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.299568 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.299579 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.299595 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.299605 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:58Z","lastTransitionTime":"2026-01-29T15:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.323311 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.323355 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.323415 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:58 crc kubenswrapper[5008]: E0129 15:28:58.323451 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:58 crc kubenswrapper[5008]: E0129 15:28:58.323564 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:58 crc kubenswrapper[5008]: E0129 15:28:58.323719 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.328362 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 01:21:43.056989325 +0000 UTC Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.402588 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.402701 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.402713 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.402732 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.402747 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:58Z","lastTransitionTime":"2026-01-29T15:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.505304 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.505353 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.505370 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.505392 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.505412 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:58Z","lastTransitionTime":"2026-01-29T15:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.608191 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.608231 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.608241 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.608257 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.608267 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:58Z","lastTransitionTime":"2026-01-29T15:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.711059 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.711129 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.711148 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.711174 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.711194 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:58Z","lastTransitionTime":"2026-01-29T15:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.814671 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.814736 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.814753 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.814775 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.814822 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:58Z","lastTransitionTime":"2026-01-29T15:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.917594 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.917627 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.917635 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.917649 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:58 crc kubenswrapper[5008]: I0129 15:28:58.917657 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:58Z","lastTransitionTime":"2026-01-29T15:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.020022 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.020086 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.020103 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.020127 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.020143 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:59Z","lastTransitionTime":"2026-01-29T15:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.024763 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovnkube-controller/3.log" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.027621 5008 scope.go:117] "RemoveContainer" containerID="c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9" Jan 29 15:28:59 crc kubenswrapper[5008]: E0129 15:28:59.027798 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pqg9w_openshift-ovn-kubernetes(1d092513-7735-4c98-9734-57bc46b99280)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podUID="1d092513-7735-4c98-9734-57bc46b99280" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.041238 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wtvvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dede057-dcce-4302-8efe-e2c3640308ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cab2ec47a6dc148b6d3554a6f4b5c1985ca43bf62bfc444ff3582273cce517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtnst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wtvvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 
2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.054118 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda885d25c8fd46bd297810d4fb6c23ec0d4bb76993e94ea75a623b0feeed247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6blck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gk9q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.093541 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d092513-7735-4c98-9734-57bc46b99280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-ac
cess-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:57Z\\\",\\\"message\\\":\\\"rk=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.244\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0129 15:28:57.018530 7109 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify 
cert\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pqg9w_openshift-ovn-kubernetes(1d092513-7735-4c98-9734-57bc46b99280)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiv
eReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2xcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqg9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.107289 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tl4fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kkc6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.120928 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f3710d4-b153-4018-a492-367eb8b81ef8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c89e24fc5acc0577d3d738d63e7982aa32a07ecc01952570f6f417286b8747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33245f510d76b9610b3e44259d0944eaef5873c4bc31c3f3012a013248d16933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76eae897742ba4e95f6d60a81e2da82f1c0b0e220f48473436b03bff9f2f7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee76cb03f96b669c6907a5d4a1520afda186e96b59ddea75f8c0fd7547c9063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.123207 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.123289 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.123310 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.123336 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.123357 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:59Z","lastTransitionTime":"2026-01-29T15:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.134380 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04122903ba8ec9ecb21ba42f430520d0a097fff8cea9572b066e146d519cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.153704 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.174416 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae42d856f5916fe3a1dace4ed5ed53a6cab552d169357b7303516719b78ef076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.188762 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e5526ab405f367c31c46e86dc356f5c21ac7529cd706af08cb6cd35e54dbe33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34142066431679db41e56f6697765165128986ad22bc919152524672e3035d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.203987 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aed8a0d-ecac-43fd-a31e-04cfbb01f872\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2cad6ba94fe1fbb01c043c1e8eabda3989f05822a3a7a6e105d2cd8aa794333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83662d418c40cdea3f8af62c97834fd30d88d2fe441ca4a0576566e8f6e9bc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83662d418c40cdea3f8af62c97834fd30d88d2fe441ca4a0576566e8f6e9bc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 
2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.220840 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-78bl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa065d0b-d690-4a7d-9079-a8f976a7aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7be81711617226cfa9af5ce71166ad176fc477581c03ba781a2746d64bbf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dce68b57fb66d0f4fb38e7ba2da32746311a7705ec80e7dbaaee405bf6175456\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be2538011dc9cfea90fe3fdf861804d4f36944262a852e2efe4c6a215019fb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e468924dd5d2c21d28331698458147151b2c74b04a9154c3f0638b271ffb36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3e60925690b1b555efc2db95efcef76510c147b6338b65b071bf0729561a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\
\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ad0f06e7035a28c9d0207484d22ac175226fac31b1d5e233ce7231cb957fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2f9571b05eeb98c339f4165ca858289b85192d254ea86d4fb2eae7ea2e61e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trwfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-78bl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.225503 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.225554 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.225565 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.225583 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.225596 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:59Z","lastTransitionTime":"2026-01-29T15:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.231453 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qj8wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ffbfcf6-99e5-450c-8c72-b2db9365d93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb113f45b58a5039b88d2c176d718d5a012e21c1785781c1fcda5843d529a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mvmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qj8wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.243302 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.253958 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.265132 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-42hcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd8ae23-3f9f-49f8-928d-46dad823fde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af9a973786f58d2c63123c28e0b1aedaa9ec4188567960c544cf68f70ba20873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:47Z\\\",\\\"message\\\":\\\"2026-01-29T15:28:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5ee9c321-48df-4d5b-add2-57b9ac5ae3f8\\\\n2026-01-29T15:28:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5ee9c321-48df-4d5b-add2-57b9ac5ae3f8 to 
/host/opt/cni/bin/\\\\n2026-01-29T15:28:02Z [verbose] multus-daemon started\\\\n2026-01-29T15:28:02Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:28:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg75x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-42hcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.276236 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f5a0b69-5edd-467c-a822-093f1689df1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6930478f2ddb5112eb944beac7cabb3e235fe16465a4706e8c665ce9481bc49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ea0d7c1f2e3e9fc74e8e58ae26ab486c6b75f655273070cebee814c7c99e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq2fz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p5kdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 
15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.298386 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77958faa-02ef-4792-b792-6094f922cd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de76f0d6e08ee14b4a5ab39a21ebdc63bdf379dcd5b648ae46a4edcc2a49f20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dcda54222f387e6560d3e297be72e19032a975feb916bc12a220870207a3f35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cb618c2c44502074cb37ce1e688d187254eafae3916372a16c8ab845fed767a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8e5fd243880ce71f07c5c532cad2cdff0e4bca2d0083280be78206a1a4c854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7393e24277d74a2b9987e6cdc54cd65485f5bc57d93ec25a2cb8479923db1feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://380711ea042d739a804ab6da4c0361004cc9ed9a48a5f4b006d168df6a84ebb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d9134a6829b7df9b42aeae161ea1f3961837d6a0b322b1adcd2417c47c0f5d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206e5978757b0979fa411a384d9e5b4728b01a769f87383df38dbb8f0f18e4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.313318 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb-bfe1-4c46-8825-6152c5e00565\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12266e3ba2ed2e5d6d1e7ee893a0d59cd4575c8870cb1e129ca0fd9b8623467f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c341c7443f28f5919ef6baeb21377b5571637ad807dd7515a5f28c218034b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f710dffd08d1bbb467ff9d2c6a5d5beed779550747459407916e743506ab27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.322907 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:28:59 crc kubenswrapper[5008]: E0129 15:28:59.323038 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.327321 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.327378 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.327394 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.327418 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.327434 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:59Z","lastTransitionTime":"2026-01-29T15:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.329222 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 02:30:35.36247741 +0000 UTC Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.333120 5008 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:27:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 15:27:57.701071 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:27:57.704726 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:27:57.707574 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445213743/tls.crt::/tmp/serving-cert-445213743/tls.key\\\\\\\"\\\\nI0129 15:27:58.036057 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:27:58.041904 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:27:58.041936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:27:58.041959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:27:58.041967 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:27:58.046875 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:27:58.046901 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:27:58.046911 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:27:58.046914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:27:58.046917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:27:58.046919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:27:58.047110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:27:58.052272 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:27:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:27:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:27:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:27:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.429502 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.429547 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.429559 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.429578 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.429590 5008 setters.go:603] "Node became not ready" node="crc" 
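The patch payloads embedded in these errors are hard to read because they are quoted twice: once when the patch is interpolated into the error string, and once more when the structured logger renders the err="..." field. A small stdlib-only Go sketch that undoes both layers and pretty-prints the JSON; the embedded payload here is a trimmed stand-in, not the full multi-kilobyte patch from the log:

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "strconv"
        "strings"
    )

    func main() {
        // Trimmed stand-in for an err="..." value copied verbatim from the journal.
        logged := `"failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2624b9eb\\\"}}\" for pod \"ns\"/\"pod\": webhook error"`

        // Pass 1: undo the structured logger's quoting of the whole error string.
        errText, err := strconv.Unquote(logged)
        if err != nil {
            panic(err)
        }

        // Pass 2: the patch itself was %q-quoted inside the error string;
        // QuotedPrefix isolates that quoted region, Unquote recovers raw JSON.
        start := strings.Index(errText, `"`)
        quoted, err := strconv.QuotedPrefix(errText[start:])
        if err != nil {
            panic(err)
        }
        patch, _ := strconv.Unquote(quoted)

        var pretty bytes.Buffer
        if err := json.Indent(&pretty, []byte(patch), "", "  "); err != nil {
            panic(err)
        }
        fmt.Println(pretty.String())
    }
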
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:59Z","lastTransitionTime":"2026-01-29T15:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.533069 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.533135 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.533154 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.533180 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.533198 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:59Z","lastTransitionTime":"2026-01-29T15:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.637638 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.637703 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.637727 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.637758 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.637816 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:59Z","lastTransitionTime":"2026-01-29T15:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.741070 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.741137 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.741156 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.741181 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.741202 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:59Z","lastTransitionTime":"2026-01-29T15:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.844824 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.844858 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.844866 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.844880 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.844889 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:59Z","lastTransitionTime":"2026-01-29T15:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.947527 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.947574 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.947590 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.947611 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:59 crc kubenswrapper[5008]: I0129 15:28:59.947629 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:59Z","lastTransitionTime":"2026-01-29T15:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.057139 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.057547 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.057563 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.057582 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.057595 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:00Z","lastTransitionTime":"2026-01-29T15:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.160299 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.160344 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.160352 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.160367 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.160376 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:00Z","lastTransitionTime":"2026-01-29T15:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.267536 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.267584 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.267596 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.267612 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.267621 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:00Z","lastTransitionTime":"2026-01-29T15:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.323069 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.323198 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.323308 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:00 crc kubenswrapper[5008]: E0129 15:29:00.323339 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:00 crc kubenswrapper[5008]: E0129 15:29:00.323405 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:00 crc kubenswrapper[5008]: E0129 15:29:00.323501 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.330269 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 11:27:35.831757744 +0000 UTC Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.370441 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.370483 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.370494 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.370510 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.370519 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:00Z","lastTransitionTime":"2026-01-29T15:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.473813 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.473869 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.473885 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.473908 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.473926 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:00Z","lastTransitionTime":"2026-01-29T15:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.576596 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.576632 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.576644 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.576663 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.576675 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:00Z","lastTransitionTime":"2026-01-29T15:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.679620 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.679664 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.679682 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.679699 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.679710 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:00Z","lastTransitionTime":"2026-01-29T15:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.782453 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.782525 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.782548 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.782579 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.782602 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:00Z","lastTransitionTime":"2026-01-29T15:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.884597 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.884638 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.884649 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.884665 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.884676 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:00Z","lastTransitionTime":"2026-01-29T15:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.987235 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.987299 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.987319 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.987346 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:00 crc kubenswrapper[5008]: I0129 15:29:00.987365 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:00Z","lastTransitionTime":"2026-01-29T15:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.090368 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.090406 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.090414 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.090427 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.090437 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:01Z","lastTransitionTime":"2026-01-29T15:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.193560 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.193642 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.193662 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.193689 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.193707 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:01Z","lastTransitionTime":"2026-01-29T15:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.296479 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.296552 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.296569 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.296591 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.296606 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:01Z","lastTransitionTime":"2026-01-29T15:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.323390 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:01 crc kubenswrapper[5008]: E0129 15:29:01.323554 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.331115 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 09:42:53.850976977 +0000 UTC Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.399360 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.399405 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.399417 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.399438 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.399451 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:01Z","lastTransitionTime":"2026-01-29T15:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.502339 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.502382 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.502393 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.502409 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.502420 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:01Z","lastTransitionTime":"2026-01-29T15:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
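The certificate_manager lines above report a different rotation deadline on each pass (2026-01-08, 2025-12-10, 2025-11-29, ...) for the same expiration time. That is expected behavior: client-go's certificate manager draws a fresh jittered deadline, roughly 70-90% of the way through the certificate's lifetime after NotBefore, every time it evaluates rotation, and all of these deadlines already lie in the past, so rotation is re-attempted on each loop. A sketch of that rule; NotBefore is not in the log, so a one-year lifetime is assumed here:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // nextRotationDeadline mirrors client-go's rule: rotate at a uniformly
    // random point 70-90% of the way through the certificate's lifetime.
    func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := float64(notAfter.Sub(notBefore))
        return notBefore.Add(time.Duration((0.7 + 0.2*rand.Float64()) * total))
    }

    func main() {
        notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // expiration from the log
        notBefore := notAfter.Add(-365 * 24 * time.Hour)                // assumed one-year lifetime
        for i := 0; i < 3; i++ {
            fmt.Println("rotation deadline:", nextRotationDeadline(notBefore, notAfter))
        }
    }

Each run of the loop prints a different deadline for the same expiration, matching the shifting deadlines in the journal.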
Has your network provider started?"} Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.605046 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.605104 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.605157 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.605184 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.605201 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:01Z","lastTransitionTime":"2026-01-29T15:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.707577 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.707639 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.707661 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.707690 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.707710 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:01Z","lastTransitionTime":"2026-01-29T15:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.809869 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.809907 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.809917 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.809931 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.809946 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:01Z","lastTransitionTime":"2026-01-29T15:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.912899 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.912940 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.912951 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.912997 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:01 crc kubenswrapper[5008]: I0129 15:29:01.913014 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:01Z","lastTransitionTime":"2026-01-29T15:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.015897 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.015949 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.015962 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.015984 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.015995 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:02Z","lastTransitionTime":"2026-01-29T15:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.118598 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.118675 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.118700 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.118731 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.118754 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:02Z","lastTransitionTime":"2026-01-29T15:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.220986 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.221029 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.221037 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.221050 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.221059 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:02Z","lastTransitionTime":"2026-01-29T15:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.303095 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.303279 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:02 crc kubenswrapper[5008]: E0129 15:29:02.303432 5008 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:29:02 crc kubenswrapper[5008]: E0129 15:29:02.303432 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.303391664 +0000 UTC m=+149.976245961 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:29:02 crc kubenswrapper[5008]: E0129 15:29:02.303518 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-29 15:30:06.303488307 +0000 UTC m=+149.976342554 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.322654 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.322699 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.322654 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:02 crc kubenswrapper[5008]: E0129 15:29:02.322867 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:02 crc kubenswrapper[5008]: E0129 15:29:02.322952 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:02 crc kubenswrapper[5008]: E0129 15:29:02.323089 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.323817 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.323849 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.323857 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.323870 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.323882 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:02Z","lastTransitionTime":"2026-01-29T15:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.332164 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 08:09:44.943932023 +0000 UTC Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.405280 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.405398 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.405476 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:02 crc kubenswrapper[5008]: E0129 15:29:02.405597 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:29:02 crc kubenswrapper[5008]: E0129 15:29:02.405645 5008 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:29:02 crc kubenswrapper[5008]: E0129 15:29:02.405741 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" 
failed. No retries permitted until 2026-01-29 15:30:06.405715221 +0000 UTC m=+150.078569488 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:29:02 crc kubenswrapper[5008]: E0129 15:29:02.405652 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:29:02 crc kubenswrapper[5008]: E0129 15:29:02.405768 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:29:02 crc kubenswrapper[5008]: E0129 15:29:02.405852 5008 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:29:02 crc kubenswrapper[5008]: E0129 15:29:02.405882 5008 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:29:02 crc kubenswrapper[5008]: E0129 15:29:02.405850 5008 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:29:02 crc kubenswrapper[5008]: E0129 15:29:02.405958 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.405934739 +0000 UTC m=+150.078789016 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:29:02 crc kubenswrapper[5008]: E0129 15:29:02.406044 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.406016031 +0000 UTC m=+150.078870348 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.427430 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.427515 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.427536 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.427572 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.427594 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:02Z","lastTransitionTime":"2026-01-29T15:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.531055 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.531104 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.531115 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.531132 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.531142 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:02Z","lastTransitionTime":"2026-01-29T15:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.633645 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.633681 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.633691 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.633706 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.633716 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:02Z","lastTransitionTime":"2026-01-29T15:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.742652 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.742757 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.742833 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.742872 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.742899 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:02Z","lastTransitionTime":"2026-01-29T15:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.846963 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.847038 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.847063 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.847093 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.847113 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:02Z","lastTransitionTime":"2026-01-29T15:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.950215 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.950263 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.950274 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.950291 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:02 crc kubenswrapper[5008]: I0129 15:29:02.950303 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:02Z","lastTransitionTime":"2026-01-29T15:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.053732 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.053859 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.053877 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.053901 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.053920 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:03Z","lastTransitionTime":"2026-01-29T15:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.156627 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.158005 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.158042 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.158067 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.158085 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:03Z","lastTransitionTime":"2026-01-29T15:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.261144 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.261239 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.261264 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.261296 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.261320 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:03Z","lastTransitionTime":"2026-01-29T15:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.323160 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:03 crc kubenswrapper[5008]: E0129 15:29:03.323332 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.333304 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 17:11:19.050432148 +0000 UTC Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.364379 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.364452 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.364473 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.364494 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.364511 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:03Z","lastTransitionTime":"2026-01-29T15:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.466562 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.466623 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.466634 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.466657 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.466678 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:03Z","lastTransitionTime":"2026-01-29T15:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.570010 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.570062 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.570079 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.570098 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.570116 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:03Z","lastTransitionTime":"2026-01-29T15:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.672976 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.673014 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.673025 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.673041 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.673054 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:03Z","lastTransitionTime":"2026-01-29T15:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.776209 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.776253 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.776266 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.776284 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.776297 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:03Z","lastTransitionTime":"2026-01-29T15:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.879368 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.879434 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.879452 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.879479 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.879544 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:03Z","lastTransitionTime":"2026-01-29T15:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.982544 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.982642 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.982663 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.982690 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:03 crc kubenswrapper[5008]: I0129 15:29:03.982707 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:03Z","lastTransitionTime":"2026-01-29T15:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.086907 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.086949 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.086973 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.086995 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.087010 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:04Z","lastTransitionTime":"2026-01-29T15:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.190086 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.190166 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.190190 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.190222 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.190248 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:04Z","lastTransitionTime":"2026-01-29T15:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.292923 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.292978 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.292993 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.293015 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.293032 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:04Z","lastTransitionTime":"2026-01-29T15:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.322750 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.322945 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:04 crc kubenswrapper[5008]: E0129 15:29:04.323150 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.323171 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:04 crc kubenswrapper[5008]: E0129 15:29:04.323531 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:04 crc kubenswrapper[5008]: E0129 15:29:04.323625 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.334088 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 14:25:48.378985485 +0000 UTC Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.395519 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.395958 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.395971 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.395987 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.395999 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:04Z","lastTransitionTime":"2026-01-29T15:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.499165 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.499211 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.499220 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.499235 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.499246 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:04Z","lastTransitionTime":"2026-01-29T15:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.603250 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.603285 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.603296 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.603316 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.603328 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:04Z","lastTransitionTime":"2026-01-29T15:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.706379 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.706423 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.706432 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.706446 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.706456 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:04Z","lastTransitionTime":"2026-01-29T15:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.808255 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.808301 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.808337 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.808355 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.808364 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:04Z","lastTransitionTime":"2026-01-29T15:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.911181 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.911237 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.911254 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.911279 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:04 crc kubenswrapper[5008]: I0129 15:29:04.911296 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:04Z","lastTransitionTime":"2026-01-29T15:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.014372 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.014414 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.014425 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.014442 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.014457 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.116908 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.116996 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.117009 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.117025 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.117036 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.219468 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.219520 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.219530 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.219542 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.219552 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.322821 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:05 crc kubenswrapper[5008]: E0129 15:29:05.322945 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.328848 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.328931 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.328962 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.329016 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.329046 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.334370 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 02:11:31.369764543 +0000 UTC Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.432546 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.432594 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.432611 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.432634 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.432654 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.535734 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.535848 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.535869 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.535893 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.535910 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.639020 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.639070 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.639084 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.639102 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.639116 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.742544 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.742585 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.742595 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.742611 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.742621 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.844383 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.844419 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.844427 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.844440 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.844450 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.947164 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.947230 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.947252 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.947283 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[5008]: I0129 15:29:05.947304 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.050611 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.050876 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.051068 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.051122 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.051162 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:06Z","lastTransitionTime":"2026-01-29T15:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.153684 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.153857 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.153884 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.153909 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.153927 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:06Z","lastTransitionTime":"2026-01-29T15:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.256974 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.257026 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.257042 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.257064 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.257081 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:06Z","lastTransitionTime":"2026-01-29T15:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.323014 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.323089 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.323192 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:06 crc kubenswrapper[5008]: E0129 15:29:06.323302 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:06 crc kubenswrapper[5008]: E0129 15:29:06.323454 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:06 crc kubenswrapper[5008]: E0129 15:29:06.323732 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.335181 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 19:48:34.334377245 +0000 UTC Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.337885 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.337970 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.337998 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.338030 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.338053 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:06Z","lastTransitionTime":"2026-01-29T15:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.381689 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.381749 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.381759 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.381775 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.381808 5008 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:06Z","lastTransitionTime":"2026-01-29T15:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.424530 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8"] Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.425043 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.426991 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.427487 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.427631 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.427813 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.485301 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=23.485277721 podStartE2EDuration="23.485277721s" podCreationTimestamp="2026-01-29 15:28:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:29:06.484701603 +0000 UTC m=+90.157555840" watchObservedRunningTime="2026-01-29 15:29:06.485277721 +0000 UTC m=+90.158131968" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.501759 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-78bl2" podStartSLOduration=68.501742532 podStartE2EDuration="1m8.501742532s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:29:06.501654149 +0000 UTC m=+90.174508436" watchObservedRunningTime="2026-01-29 15:29:06.501742532 +0000 UTC m=+90.174596789" Jan 29 
15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.524153 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-qj8wb" podStartSLOduration=68.524131007 podStartE2EDuration="1m8.524131007s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:29:06.513113285 +0000 UTC m=+90.185967532" watchObservedRunningTime="2026-01-29 15:29:06.524131007 +0000 UTC m=+90.196985244" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.550935 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f49fada-aec3-467e-93ac-1a06f27ea564-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-h9jn8\" (UID: \"8f49fada-aec3-467e-93ac-1a06f27ea564\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.551029 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8f49fada-aec3-467e-93ac-1a06f27ea564-service-ca\") pod \"cluster-version-operator-5c965bbfc6-h9jn8\" (UID: \"8f49fada-aec3-467e-93ac-1a06f27ea564\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.551138 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8f49fada-aec3-467e-93ac-1a06f27ea564-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-h9jn8\" (UID: \"8f49fada-aec3-467e-93ac-1a06f27ea564\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.551177 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8f49fada-aec3-467e-93ac-1a06f27ea564-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-h9jn8\" (UID: \"8f49fada-aec3-467e-93ac-1a06f27ea564\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.551217 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f49fada-aec3-467e-93ac-1a06f27ea564-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-h9jn8\" (UID: \"8f49fada-aec3-467e-93ac-1a06f27ea564\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.568052 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-42hcz" podStartSLOduration=68.56802962 podStartE2EDuration="1m8.56802962s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:29:06.557812253 +0000 UTC m=+90.230666520" watchObservedRunningTime="2026-01-29 15:29:06.56802962 +0000 UTC m=+90.240883867" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.568279 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p5kdp" podStartSLOduration=67.568273737 podStartE2EDuration="1m7.568273737s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:29:06.567310008 +0000 UTC m=+90.240164245" watchObservedRunningTime="2026-01-29 15:29:06.568273737 +0000 UTC m=+90.241127994" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.593081 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=67.593061667 podStartE2EDuration="1m7.593061667s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:29:06.592621354 +0000 UTC m=+90.265475591" watchObservedRunningTime="2026-01-29 15:29:06.593061667 +0000 UTC m=+90.265915904" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.631385 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=68.631370606 podStartE2EDuration="1m8.631370606s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:29:06.617691642 +0000 UTC m=+90.290545879" watchObservedRunningTime="2026-01-29 15:29:06.631370606 +0000 UTC m=+90.304224843" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.631492 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=68.631488721 podStartE2EDuration="1m8.631488721s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:29:06.630867901 +0000 UTC m=+90.303722158" watchObservedRunningTime="2026-01-29 15:29:06.631488721 +0000 UTC m=+90.304342958" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.641060 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-wtvvb" podStartSLOduration=68.641038357 podStartE2EDuration="1m8.641038357s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:29:06.640565452 +0000 UTC m=+90.313419689" watchObservedRunningTime="2026-01-29 15:29:06.641038357 +0000 UTC m=+90.313892604" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.651660 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8f49fada-aec3-467e-93ac-1a06f27ea564-service-ca\") pod \"cluster-version-operator-5c965bbfc6-h9jn8\" (UID: \"8f49fada-aec3-467e-93ac-1a06f27ea564\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.651710 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8f49fada-aec3-467e-93ac-1a06f27ea564-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-h9jn8\" (UID: \"8f49fada-aec3-467e-93ac-1a06f27ea564\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.651730 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8f49fada-aec3-467e-93ac-1a06f27ea564-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-h9jn8\" (UID: \"8f49fada-aec3-467e-93ac-1a06f27ea564\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.651763 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f49fada-aec3-467e-93ac-1a06f27ea564-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-h9jn8\" (UID: \"8f49fada-aec3-467e-93ac-1a06f27ea564\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.651713 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podStartSLOduration=68.651698478 podStartE2EDuration="1m8.651698478s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:29:06.651413279 +0000 UTC m=+90.324267526" watchObservedRunningTime="2026-01-29 15:29:06.651698478 +0000 UTC m=+90.324552715" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.651809 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8f49fada-aec3-467e-93ac-1a06f27ea564-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-h9jn8\" (UID: \"8f49fada-aec3-467e-93ac-1a06f27ea564\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.651806 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f49fada-aec3-467e-93ac-1a06f27ea564-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-h9jn8\" (UID: \"8f49fada-aec3-467e-93ac-1a06f27ea564\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.651817 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8f49fada-aec3-467e-93ac-1a06f27ea564-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-h9jn8\" (UID: \"8f49fada-aec3-467e-93ac-1a06f27ea564\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.652622 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8f49fada-aec3-467e-93ac-1a06f27ea564-service-ca\") pod \"cluster-version-operator-5c965bbfc6-h9jn8\" (UID: \"8f49fada-aec3-467e-93ac-1a06f27ea564\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.666647 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f49fada-aec3-467e-93ac-1a06f27ea564-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-h9jn8\" (UID: 
\"8f49fada-aec3-467e-93ac-1a06f27ea564\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.669630 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f49fada-aec3-467e-93ac-1a06f27ea564-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-h9jn8\" (UID: \"8f49fada-aec3-467e-93ac-1a06f27ea564\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.696376 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=40.696361155 podStartE2EDuration="40.696361155s" podCreationTimestamp="2026-01-29 15:28:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:29:06.696011724 +0000 UTC m=+90.368865981" watchObservedRunningTime="2026-01-29 15:29:06.696361155 +0000 UTC m=+90.369215392" Jan 29 15:29:06 crc kubenswrapper[5008]: I0129 15:29:06.744392 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" Jan 29 15:29:06 crc kubenswrapper[5008]: W0129 15:29:06.770053 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f49fada_aec3_467e_93ac_1a06f27ea564.slice/crio-ca932034e38ef1a752e34502bfeecaed13983b0e2f9df41a337106eadaa1f7bf WatchSource:0}: Error finding container ca932034e38ef1a752e34502bfeecaed13983b0e2f9df41a337106eadaa1f7bf: Status 404 returned error can't find the container with id ca932034e38ef1a752e34502bfeecaed13983b0e2f9df41a337106eadaa1f7bf Jan 29 15:29:07 crc kubenswrapper[5008]: I0129 15:29:07.057371 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" event={"ID":"8f49fada-aec3-467e-93ac-1a06f27ea564","Type":"ContainerStarted","Data":"654afe7caf138947547858d117e989e4374d20d9d127e21145889b20e89cb559"} Jan 29 15:29:07 crc kubenswrapper[5008]: I0129 15:29:07.057448 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" event={"ID":"8f49fada-aec3-467e-93ac-1a06f27ea564","Type":"ContainerStarted","Data":"ca932034e38ef1a752e34502bfeecaed13983b0e2f9df41a337106eadaa1f7bf"} Jan 29 15:29:07 crc kubenswrapper[5008]: I0129 15:29:07.074296 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h9jn8" podStartSLOduration=69.074061182 podStartE2EDuration="1m9.074061182s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:29:07.073864216 +0000 UTC m=+90.746718493" watchObservedRunningTime="2026-01-29 15:29:07.074061182 +0000 UTC m=+90.746915429" Jan 29 15:29:07 crc kubenswrapper[5008]: I0129 15:29:07.323256 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:07 crc kubenswrapper[5008]: E0129 15:29:07.324553 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:07 crc kubenswrapper[5008]: I0129 15:29:07.335551 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 09:48:34.051723849 +0000 UTC Jan 29 15:29:07 crc kubenswrapper[5008]: I0129 15:29:07.335672 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 29 15:29:07 crc kubenswrapper[5008]: I0129 15:29:07.350381 5008 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 29 15:29:08 crc kubenswrapper[5008]: I0129 15:29:08.323495 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:08 crc kubenswrapper[5008]: I0129 15:29:08.323605 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:08 crc kubenswrapper[5008]: I0129 15:29:08.323644 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:08 crc kubenswrapper[5008]: E0129 15:29:08.323725 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:08 crc kubenswrapper[5008]: E0129 15:29:08.323887 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:08 crc kubenswrapper[5008]: E0129 15:29:08.323960 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:09 crc kubenswrapper[5008]: I0129 15:29:09.323363 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:09 crc kubenswrapper[5008]: E0129 15:29:09.323628 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:10 crc kubenswrapper[5008]: I0129 15:29:10.323608 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:10 crc kubenswrapper[5008]: I0129 15:29:10.323714 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:10 crc kubenswrapper[5008]: E0129 15:29:10.323763 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:10 crc kubenswrapper[5008]: E0129 15:29:10.323873 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:10 crc kubenswrapper[5008]: I0129 15:29:10.323940 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:10 crc kubenswrapper[5008]: E0129 15:29:10.324001 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:11 crc kubenswrapper[5008]: I0129 15:29:11.322878 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:11 crc kubenswrapper[5008]: E0129 15:29:11.323387 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:12 crc kubenswrapper[5008]: I0129 15:29:12.322662 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:12 crc kubenswrapper[5008]: I0129 15:29:12.322697 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:12 crc kubenswrapper[5008]: I0129 15:29:12.322852 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:12 crc kubenswrapper[5008]: E0129 15:29:12.322932 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:12 crc kubenswrapper[5008]: E0129 15:29:12.323054 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:12 crc kubenswrapper[5008]: E0129 15:29:12.323134 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:13 crc kubenswrapper[5008]: I0129 15:29:13.323860 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:13 crc kubenswrapper[5008]: E0129 15:29:13.324725 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:13 crc kubenswrapper[5008]: I0129 15:29:13.325675 5008 scope.go:117] "RemoveContainer" containerID="c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9" Jan 29 15:29:13 crc kubenswrapper[5008]: E0129 15:29:13.325928 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pqg9w_openshift-ovn-kubernetes(1d092513-7735-4c98-9734-57bc46b99280)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podUID="1d092513-7735-4c98-9734-57bc46b99280" Jan 29 15:29:14 crc kubenswrapper[5008]: I0129 15:29:14.323244 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:14 crc kubenswrapper[5008]: I0129 15:29:14.323255 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:14 crc kubenswrapper[5008]: E0129 15:29:14.323449 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:14 crc kubenswrapper[5008]: I0129 15:29:14.323275 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:14 crc kubenswrapper[5008]: E0129 15:29:14.323610 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:14 crc kubenswrapper[5008]: E0129 15:29:14.324030 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:15 crc kubenswrapper[5008]: I0129 15:29:15.323523 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:15 crc kubenswrapper[5008]: E0129 15:29:15.323661 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:16 crc kubenswrapper[5008]: I0129 15:29:16.323431 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:16 crc kubenswrapper[5008]: I0129 15:29:16.323471 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:16 crc kubenswrapper[5008]: I0129 15:29:16.323580 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:16 crc kubenswrapper[5008]: E0129 15:29:16.323649 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:16 crc kubenswrapper[5008]: E0129 15:29:16.323729 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:16 crc kubenswrapper[5008]: E0129 15:29:16.323796 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:16 crc kubenswrapper[5008]: I0129 15:29:16.990346 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs\") pod \"network-metrics-daemon-kkc6c\" (UID: \"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\") " pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:16 crc kubenswrapper[5008]: E0129 15:29:16.990752 5008 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:29:16 crc kubenswrapper[5008]: E0129 15:29:16.990943 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs podName:f3716fd8-7f9b-44e2-ac3c-e907d8793dc9 nodeName:}" failed. No retries permitted until 2026-01-29 15:30:20.990917257 +0000 UTC m=+164.663771494 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs") pod "network-metrics-daemon-kkc6c" (UID: "f3716fd8-7f9b-44e2-ac3c-e907d8793dc9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:29:17 crc kubenswrapper[5008]: I0129 15:29:17.325033 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:17 crc kubenswrapper[5008]: E0129 15:29:17.325208 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:18 crc kubenswrapper[5008]: I0129 15:29:18.323081 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:18 crc kubenswrapper[5008]: I0129 15:29:18.323074 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:18 crc kubenswrapper[5008]: I0129 15:29:18.323205 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:18 crc kubenswrapper[5008]: E0129 15:29:18.323404 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:18 crc kubenswrapper[5008]: E0129 15:29:18.323931 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:18 crc kubenswrapper[5008]: E0129 15:29:18.324257 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:19 crc kubenswrapper[5008]: I0129 15:29:19.323301 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:19 crc kubenswrapper[5008]: E0129 15:29:19.323595 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:20 crc kubenswrapper[5008]: I0129 15:29:20.323581 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:20 crc kubenswrapper[5008]: I0129 15:29:20.323656 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:20 crc kubenswrapper[5008]: I0129 15:29:20.323606 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:20 crc kubenswrapper[5008]: E0129 15:29:20.323737 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:20 crc kubenswrapper[5008]: E0129 15:29:20.323967 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:20 crc kubenswrapper[5008]: E0129 15:29:20.324132 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:21 crc kubenswrapper[5008]: I0129 15:29:21.323168 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:21 crc kubenswrapper[5008]: E0129 15:29:21.323433 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:22 crc kubenswrapper[5008]: I0129 15:29:22.323209 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:22 crc kubenswrapper[5008]: I0129 15:29:22.323300 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:22 crc kubenswrapper[5008]: I0129 15:29:22.323217 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:22 crc kubenswrapper[5008]: E0129 15:29:22.323451 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:22 crc kubenswrapper[5008]: E0129 15:29:22.323636 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:22 crc kubenswrapper[5008]: E0129 15:29:22.323875 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:23 crc kubenswrapper[5008]: I0129 15:29:23.322822 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:23 crc kubenswrapper[5008]: E0129 15:29:23.323141 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:24 crc kubenswrapper[5008]: I0129 15:29:24.322957 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:24 crc kubenswrapper[5008]: E0129 15:29:24.323085 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:24 crc kubenswrapper[5008]: I0129 15:29:24.323289 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:24 crc kubenswrapper[5008]: E0129 15:29:24.323337 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:24 crc kubenswrapper[5008]: I0129 15:29:24.324031 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:24 crc kubenswrapper[5008]: E0129 15:29:24.324401 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:24 crc kubenswrapper[5008]: I0129 15:29:24.324661 5008 scope.go:117] "RemoveContainer" containerID="c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9" Jan 29 15:29:24 crc kubenswrapper[5008]: E0129 15:29:24.324769 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pqg9w_openshift-ovn-kubernetes(1d092513-7735-4c98-9734-57bc46b99280)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podUID="1d092513-7735-4c98-9734-57bc46b99280" Jan 29 15:29:25 crc kubenswrapper[5008]: I0129 15:29:25.322908 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:25 crc kubenswrapper[5008]: E0129 15:29:25.323047 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:26 crc kubenswrapper[5008]: I0129 15:29:26.323586 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:26 crc kubenswrapper[5008]: I0129 15:29:26.323658 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:26 crc kubenswrapper[5008]: E0129 15:29:26.323744 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:26 crc kubenswrapper[5008]: I0129 15:29:26.323667 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:26 crc kubenswrapper[5008]: E0129 15:29:26.323865 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:26 crc kubenswrapper[5008]: E0129 15:29:26.323965 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:27 crc kubenswrapper[5008]: I0129 15:29:27.323394 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:27 crc kubenswrapper[5008]: E0129 15:29:27.324759 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:28 crc kubenswrapper[5008]: I0129 15:29:28.323754 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:28 crc kubenswrapper[5008]: I0129 15:29:28.323906 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:28 crc kubenswrapper[5008]: I0129 15:29:28.323768 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:28 crc kubenswrapper[5008]: E0129 15:29:28.323961 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:28 crc kubenswrapper[5008]: E0129 15:29:28.324093 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:28 crc kubenswrapper[5008]: E0129 15:29:28.324292 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:29 crc kubenswrapper[5008]: I0129 15:29:29.323758 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:29 crc kubenswrapper[5008]: E0129 15:29:29.324506 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:30 crc kubenswrapper[5008]: I0129 15:29:30.323402 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:30 crc kubenswrapper[5008]: I0129 15:29:30.323464 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:30 crc kubenswrapper[5008]: I0129 15:29:30.323515 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:30 crc kubenswrapper[5008]: E0129 15:29:30.323708 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:30 crc kubenswrapper[5008]: E0129 15:29:30.323888 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:30 crc kubenswrapper[5008]: E0129 15:29:30.324002 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:31 crc kubenswrapper[5008]: I0129 15:29:31.323458 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:31 crc kubenswrapper[5008]: E0129 15:29:31.323869 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:32 crc kubenswrapper[5008]: I0129 15:29:32.323607 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:32 crc kubenswrapper[5008]: E0129 15:29:32.323734 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:32 crc kubenswrapper[5008]: I0129 15:29:32.323863 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:32 crc kubenswrapper[5008]: E0129 15:29:32.324027 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:32 crc kubenswrapper[5008]: I0129 15:29:32.323872 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:32 crc kubenswrapper[5008]: E0129 15:29:32.324210 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:33 crc kubenswrapper[5008]: I0129 15:29:33.323290 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:33 crc kubenswrapper[5008]: E0129 15:29:33.323470 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:34 crc kubenswrapper[5008]: I0129 15:29:34.154247 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-42hcz_cdd8ae23-3f9f-49f8-928d-46dad823fde4/kube-multus/1.log" Jan 29 15:29:34 crc kubenswrapper[5008]: I0129 15:29:34.154947 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-42hcz_cdd8ae23-3f9f-49f8-928d-46dad823fde4/kube-multus/0.log" Jan 29 15:29:34 crc kubenswrapper[5008]: I0129 15:29:34.155010 5008 generic.go:334] "Generic (PLEG): container finished" podID="cdd8ae23-3f9f-49f8-928d-46dad823fde4" containerID="af9a973786f58d2c63123c28e0b1aedaa9ec4188567960c544cf68f70ba20873" exitCode=1 Jan 29 15:29:34 crc kubenswrapper[5008]: I0129 15:29:34.155050 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-42hcz" event={"ID":"cdd8ae23-3f9f-49f8-928d-46dad823fde4","Type":"ContainerDied","Data":"af9a973786f58d2c63123c28e0b1aedaa9ec4188567960c544cf68f70ba20873"} Jan 29 15:29:34 crc kubenswrapper[5008]: I0129 15:29:34.155105 5008 scope.go:117] "RemoveContainer" containerID="a44b0a7b0b53c339b51d5391ad7e0eb342bdb491b4af37a98f48788b8e2c077b" Jan 29 15:29:34 crc kubenswrapper[5008]: I0129 15:29:34.155731 5008 scope.go:117] "RemoveContainer" containerID="af9a973786f58d2c63123c28e0b1aedaa9ec4188567960c544cf68f70ba20873" Jan 29 15:29:34 crc kubenswrapper[5008]: E0129 15:29:34.156110 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-42hcz_openshift-multus(cdd8ae23-3f9f-49f8-928d-46dad823fde4)\"" pod="openshift-multus/multus-42hcz" podUID="cdd8ae23-3f9f-49f8-928d-46dad823fde4" Jan 29 15:29:34 crc kubenswrapper[5008]: I0129 15:29:34.322871 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:34 crc kubenswrapper[5008]: I0129 15:29:34.322872 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:34 crc kubenswrapper[5008]: E0129 15:29:34.323704 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:34 crc kubenswrapper[5008]: E0129 15:29:34.323349 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:34 crc kubenswrapper[5008]: I0129 15:29:34.322941 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:34 crc kubenswrapper[5008]: E0129 15:29:34.323890 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:35 crc kubenswrapper[5008]: I0129 15:29:35.161186 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-42hcz_cdd8ae23-3f9f-49f8-928d-46dad823fde4/kube-multus/1.log" Jan 29 15:29:35 crc kubenswrapper[5008]: I0129 15:29:35.323200 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:35 crc kubenswrapper[5008]: E0129 15:29:35.323415 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:36 crc kubenswrapper[5008]: I0129 15:29:36.322942 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:36 crc kubenswrapper[5008]: I0129 15:29:36.322999 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:36 crc kubenswrapper[5008]: E0129 15:29:36.323167 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:36 crc kubenswrapper[5008]: I0129 15:29:36.323208 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:36 crc kubenswrapper[5008]: E0129 15:29:36.323455 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:36 crc kubenswrapper[5008]: E0129 15:29:36.323877 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:37 crc kubenswrapper[5008]: E0129 15:29:37.318386 5008 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 29 15:29:37 crc kubenswrapper[5008]: I0129 15:29:37.323070 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:37 crc kubenswrapper[5008]: E0129 15:29:37.326260 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:37 crc kubenswrapper[5008]: E0129 15:29:37.439998 5008 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 15:29:38 crc kubenswrapper[5008]: I0129 15:29:38.323537 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:38 crc kubenswrapper[5008]: I0129 15:29:38.323658 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:38 crc kubenswrapper[5008]: E0129 15:29:38.323684 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:38 crc kubenswrapper[5008]: I0129 15:29:38.323864 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:38 crc kubenswrapper[5008]: E0129 15:29:38.325589 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:38 crc kubenswrapper[5008]: E0129 15:29:38.325709 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:38 crc kubenswrapper[5008]: I0129 15:29:38.327434 5008 scope.go:117] "RemoveContainer" containerID="c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9" Jan 29 15:29:39 crc kubenswrapper[5008]: I0129 15:29:39.323308 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:39 crc kubenswrapper[5008]: E0129 15:29:39.323539 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:40 crc kubenswrapper[5008]: I0129 15:29:40.165515 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kkc6c"] Jan 29 15:29:40 crc kubenswrapper[5008]: I0129 15:29:40.185251 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovnkube-controller/3.log" Jan 29 15:29:40 crc kubenswrapper[5008]: I0129 15:29:40.188818 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:40 crc kubenswrapper[5008]: I0129 15:29:40.188836 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerStarted","Data":"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c"} Jan 29 15:29:40 crc kubenswrapper[5008]: E0129 15:29:40.188974 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:40 crc kubenswrapper[5008]: I0129 15:29:40.189757 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:29:40 crc kubenswrapper[5008]: I0129 15:29:40.218525 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podStartSLOduration=102.218504856 podStartE2EDuration="1m42.218504856s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:29:40.217554216 +0000 UTC m=+123.890408453" watchObservedRunningTime="2026-01-29 15:29:40.218504856 +0000 UTC m=+123.891359093" Jan 29 15:29:40 crc kubenswrapper[5008]: I0129 15:29:40.323299 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:40 crc kubenswrapper[5008]: I0129 15:29:40.323337 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:40 crc kubenswrapper[5008]: E0129 15:29:40.324106 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:40 crc kubenswrapper[5008]: I0129 15:29:40.323354 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:40 crc kubenswrapper[5008]: E0129 15:29:40.324303 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:40 crc kubenswrapper[5008]: E0129 15:29:40.324376 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:42 crc kubenswrapper[5008]: I0129 15:29:42.323697 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:42 crc kubenswrapper[5008]: I0129 15:29:42.323844 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:42 crc kubenswrapper[5008]: I0129 15:29:42.323844 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:42 crc kubenswrapper[5008]: E0129 15:29:42.323921 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:42 crc kubenswrapper[5008]: I0129 15:29:42.323973 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:42 crc kubenswrapper[5008]: E0129 15:29:42.324154 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:42 crc kubenswrapper[5008]: E0129 15:29:42.324286 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:42 crc kubenswrapper[5008]: E0129 15:29:42.324497 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:42 crc kubenswrapper[5008]: E0129 15:29:42.440883 5008 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 15:29:44 crc kubenswrapper[5008]: I0129 15:29:44.323697 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:44 crc kubenswrapper[5008]: I0129 15:29:44.323853 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:44 crc kubenswrapper[5008]: I0129 15:29:44.323749 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:44 crc kubenswrapper[5008]: I0129 15:29:44.323879 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:44 crc kubenswrapper[5008]: E0129 15:29:44.323997 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:44 crc kubenswrapper[5008]: E0129 15:29:44.324128 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:44 crc kubenswrapper[5008]: E0129 15:29:44.324225 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:44 crc kubenswrapper[5008]: E0129 15:29:44.324309 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:45 crc kubenswrapper[5008]: I0129 15:29:45.323268 5008 scope.go:117] "RemoveContainer" containerID="af9a973786f58d2c63123c28e0b1aedaa9ec4188567960c544cf68f70ba20873" Jan 29 15:29:46 crc kubenswrapper[5008]: I0129 15:29:46.211616 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-42hcz_cdd8ae23-3f9f-49f8-928d-46dad823fde4/kube-multus/1.log" Jan 29 15:29:46 crc kubenswrapper[5008]: I0129 15:29:46.211673 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-42hcz" event={"ID":"cdd8ae23-3f9f-49f8-928d-46dad823fde4","Type":"ContainerStarted","Data":"a79b05ecc77ae822ab75bfdce779bbfbb375857cfbf47a090a83a690373dc6e0"} Jan 29 15:29:46 crc kubenswrapper[5008]: I0129 15:29:46.322816 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:46 crc kubenswrapper[5008]: E0129 15:29:46.322952 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:46 crc kubenswrapper[5008]: I0129 15:29:46.322968 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:46 crc kubenswrapper[5008]: I0129 15:29:46.323012 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:46 crc kubenswrapper[5008]: E0129 15:29:46.323091 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:46 crc kubenswrapper[5008]: I0129 15:29:46.323139 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:46 crc kubenswrapper[5008]: E0129 15:29:46.323191 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:46 crc kubenswrapper[5008]: E0129 15:29:46.323231 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:47 crc kubenswrapper[5008]: E0129 15:29:47.441503 5008 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 15:29:48 crc kubenswrapper[5008]: I0129 15:29:48.323640 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:48 crc kubenswrapper[5008]: I0129 15:29:48.323731 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:48 crc kubenswrapper[5008]: I0129 15:29:48.323655 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:48 crc kubenswrapper[5008]: I0129 15:29:48.323753 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:48 crc kubenswrapper[5008]: E0129 15:29:48.323906 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:48 crc kubenswrapper[5008]: E0129 15:29:48.324093 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:48 crc kubenswrapper[5008]: E0129 15:29:48.324222 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:48 crc kubenswrapper[5008]: E0129 15:29:48.324315 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:50 crc kubenswrapper[5008]: I0129 15:29:50.323373 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:50 crc kubenswrapper[5008]: I0129 15:29:50.323523 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:50 crc kubenswrapper[5008]: I0129 15:29:50.323373 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:50 crc kubenswrapper[5008]: E0129 15:29:50.323589 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:50 crc kubenswrapper[5008]: I0129 15:29:50.323404 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:50 crc kubenswrapper[5008]: E0129 15:29:50.323751 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:50 crc kubenswrapper[5008]: E0129 15:29:50.324005 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:50 crc kubenswrapper[5008]: E0129 15:29:50.324091 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:52 crc kubenswrapper[5008]: I0129 15:29:52.322852 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:52 crc kubenswrapper[5008]: I0129 15:29:52.322911 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:52 crc kubenswrapper[5008]: E0129 15:29:52.322980 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:52 crc kubenswrapper[5008]: I0129 15:29:52.323170 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:52 crc kubenswrapper[5008]: E0129 15:29:52.323178 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:52 crc kubenswrapper[5008]: E0129 15:29:52.323215 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:52 crc kubenswrapper[5008]: I0129 15:29:52.323246 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:52 crc kubenswrapper[5008]: E0129 15:29:52.323428 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kkc6c" podUID="f3716fd8-7f9b-44e2-ac3c-e907d8793dc9" Jan 29 15:29:54 crc kubenswrapper[5008]: I0129 15:29:54.323375 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:54 crc kubenswrapper[5008]: I0129 15:29:54.323592 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:54 crc kubenswrapper[5008]: I0129 15:29:54.323687 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:54 crc kubenswrapper[5008]: I0129 15:29:54.323703 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:29:54 crc kubenswrapper[5008]: I0129 15:29:54.328178 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 29 15:29:54 crc kubenswrapper[5008]: I0129 15:29:54.328753 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 29 15:29:54 crc kubenswrapper[5008]: I0129 15:29:54.328939 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 29 15:29:54 crc kubenswrapper[5008]: I0129 15:29:54.329026 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 29 15:29:54 crc kubenswrapper[5008]: I0129 15:29:54.329231 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 29 15:29:54 crc kubenswrapper[5008]: I0129 15:29:54.329644 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.445190 5008 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.498004 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-fsx74"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.498755 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-6wmrp"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.499310 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-6wmrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.499898 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.533463 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.533867 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.534893 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: W0129 15:29:57.535023 5008 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 29 15:29:57 crc kubenswrapper[5008]: E0129 15:29:57.535044 5008 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 15:29:57 crc kubenswrapper[5008]: W0129 15:29:57.535073 5008 reflector.go:561] object-"openshift-console"/"default-dockercfg-chnjx": failed to list *v1.Secret: secrets "default-dockercfg-chnjx" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object Jan 29 15:29:57 crc kubenswrapper[5008]: E0129 15:29:57.535083 5008 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"default-dockercfg-chnjx\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"default-dockercfg-chnjx\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.535998 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fpmxk"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.536207 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.546306 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w978\" (UniqueName: \"kubernetes.io/projected/64cf2ff9-40f4-48a5-a16c-6513cf0470bd-kube-api-access-2w978\") pod \"downloads-7954f5f757-6wmrp\" (UID: \"64cf2ff9-40f4-48a5-a16c-6513cf0470bd\") " pod="openshift-console/downloads-7954f5f757-6wmrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.546348 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6db03bb1-4833-4d3f-82d5-08ec5710251f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-fsx74\" (UID: \"6db03bb1-4833-4d3f-82d5-08ec5710251f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.546366 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmrtr\" (UniqueName: \"kubernetes.io/projected/6db03bb1-4833-4d3f-82d5-08ec5710251f-kube-api-access-wmrtr\") pod \"machine-api-operator-5694c8668f-fsx74\" (UID: \"6db03bb1-4833-4d3f-82d5-08ec5710251f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.546439 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6db03bb1-4833-4d3f-82d5-08ec5710251f-images\") pod \"machine-api-operator-5694c8668f-fsx74\" (UID: \"6db03bb1-4833-4d3f-82d5-08ec5710251f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.546464 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6db03bb1-4833-4d3f-82d5-08ec5710251f-config\") pod \"machine-api-operator-5694c8668f-fsx74\" (UID: \"6db03bb1-4833-4d3f-82d5-08ec5710251f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.563551 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.566440 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.566677 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.567620 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.567631 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.567765 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.567945 5008 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.567975 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.568668 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.568834 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.568991 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.569124 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.569204 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.569226 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.569335 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.571295 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.572033 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-468fl"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.572370 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.572954 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.573086 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.573238 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.573555 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.573961 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.576129 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wkn92"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.576372 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.576677 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.577293 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.579319 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.581754 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-g2rk6"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.582045 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4l85w"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.582507 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.582940 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.585287 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.585419 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.585455 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.585527 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.588261 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.595039 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.595053 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.595042 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.596209 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-v7r8x"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.596702 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.602577 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.603416 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.603768 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.604190 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.604751 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.622984 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.625041 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.625527 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.625642 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.626186 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.629628 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.629986 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.630174 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.630422 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.630554 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.630844 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.630925 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.631010 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.631331 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.631482 5008 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"oauth-serving-cert" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.631692 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.631746 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.631925 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.631952 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.632085 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.632178 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.632184 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.632282 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.632089 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.632362 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.632390 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.632406 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.632358 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-lkcrp"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.632485 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.632539 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.632641 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.632777 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.632908 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.633070 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.635752 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.636052 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.636468 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.636610 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.636677 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.636774 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.636841 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.636936 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.637076 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.637273 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.637703 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.637858 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.638044 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.638270 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.638498 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.638606 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.638642 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.638697 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.638830 5008 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.638967 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.639064 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.639163 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.639277 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.639371 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.639461 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-s5vvl"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.639544 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.640090 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.640417 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.640812 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.641113 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s5vvl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.641319 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.641835 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2dsnp"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.642495 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2dsnp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647115 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb-config\") pod \"authentication-operator-69f744f599-wkn92\" (UID: \"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647142 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647177 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c37e4bb-792b-4317-87ae-ca4172740500-config\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647205 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4bmd\" (UniqueName: \"kubernetes.io/projected/8eb3ecfb-3675-4931-b618-9a5ba6d23b1d-kube-api-access-v4bmd\") pod \"openshift-controller-manager-operator-756b6f6bc6-brcd7\" (UID: \"8eb3ecfb-3675-4931-b618-9a5ba6d23b1d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647228 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wkn92\" (UID: \"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647250 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb-serving-cert\") pod \"authentication-operator-69f744f599-wkn92\" (UID: \"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647274 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-serving-cert\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647296 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-serving-cert\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647317 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-service-ca\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647339 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-client-ca\") pod \"controller-manager-879f6c89f-fpmxk\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647366 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmrtr\" (UniqueName: \"kubernetes.io/projected/6db03bb1-4833-4d3f-82d5-08ec5710251f-kube-api-access-wmrtr\") pod \"machine-api-operator-5694c8668f-fsx74\" (UID: \"6db03bb1-4833-4d3f-82d5-08ec5710251f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647387 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcl2c\" (UniqueName: \"kubernetes.io/projected/1c37e4bb-792b-4317-87ae-ca4172740500-kube-api-access-mcl2c\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647406 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/653b37fe-d452-4111-b27f-ef75530abe41-image-import-ca\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647426 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d5c80c8-4e74-4618-96c0-8e76168ad709-serving-cert\") pod \"controller-manager-879f6c89f-fpmxk\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647449 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f56b5e44-f079-4c56-9e19-e09996979003-serving-cert\") pod \"route-controller-manager-6576b87f9c-4zwkl\" (UID: \"f56b5e44-f079-4c56-9e19-e09996979003\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647468 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/653b37fe-d452-4111-b27f-ef75530abe41-serving-cert\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647490 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r42b\" (UniqueName: \"kubernetes.io/projected/f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb-kube-api-access-8r42b\") pod \"authentication-operator-69f744f599-wkn92\" 
(UID: \"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647514 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1c37e4bb-792b-4317-87ae-ca4172740500-etcd-service-ca\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647535 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/653b37fe-d452-4111-b27f-ef75530abe41-etcd-serving-ca\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647558 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nktwv\" (UniqueName: \"kubernetes.io/projected/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-kube-api-access-nktwv\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647581 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/653b37fe-d452-4111-b27f-ef75530abe41-config\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647601 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-oauth-config\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647622 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-oauth-serving-cert\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647644 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647666 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8eb3ecfb-3675-4931-b618-9a5ba6d23b1d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-brcd7\" (UID: \"8eb3ecfb-3675-4931-b618-9a5ba6d23b1d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7" Jan 29 
15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647688 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-config\") pod \"controller-manager-879f6c89f-fpmxk\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647720 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/653b37fe-d452-4111-b27f-ef75530abe41-encryption-config\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647755 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8d495a4f-d952-4050-a895-e6650c083e0d-machine-approver-tls\") pod \"machine-approver-56656f9798-p8fx6\" (UID: \"8d495a4f-d952-4050-a895-e6650c083e0d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647777 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f56b5e44-f079-4c56-9e19-e09996979003-client-ca\") pod \"route-controller-manager-6576b87f9c-4zwkl\" (UID: \"f56b5e44-f079-4c56-9e19-e09996979003\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647822 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-etcd-client\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647842 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng4mr\" (UniqueName: \"kubernetes.io/projected/00332b75-a73b-49c1-9b72-73445baccf6d-kube-api-access-ng4mr\") pod \"openshift-config-operator-7777fb866f-468fl\" (UID: \"00332b75-a73b-49c1-9b72-73445baccf6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647863 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb-service-ca-bundle\") pod \"authentication-operator-69f744f599-wkn92\" (UID: \"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647886 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/696d81dd-3f1a-4c58-ae69-29fff54e590b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tczgr\" (UID: \"696d81dd-3f1a-4c58-ae69-29fff54e590b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr" Jan 29 15:29:57 crc 
kubenswrapper[5008]: I0129 15:29:57.647917 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1c37e4bb-792b-4317-87ae-ca4172740500-etcd-ca\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647939 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w978\" (UniqueName: \"kubernetes.io/projected/64cf2ff9-40f4-48a5-a16c-6513cf0470bd-kube-api-access-2w978\") pod \"downloads-7954f5f757-6wmrp\" (UID: \"64cf2ff9-40f4-48a5-a16c-6513cf0470bd\") " pod="openshift-console/downloads-7954f5f757-6wmrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647973 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f56b5e44-f079-4c56-9e19-e09996979003-config\") pod \"route-controller-manager-6576b87f9c-4zwkl\" (UID: \"f56b5e44-f079-4c56-9e19-e09996979003\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.647992 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-config\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648012 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/653b37fe-d452-4111-b27f-ef75530abe41-node-pullsecrets\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648032 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00332b75-a73b-49c1-9b72-73445baccf6d-serving-cert\") pod \"openshift-config-operator-7777fb866f-468fl\" (UID: \"00332b75-a73b-49c1-9b72-73445baccf6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648050 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648091 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6db03bb1-4833-4d3f-82d5-08ec5710251f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-fsx74\" (UID: \"6db03bb1-4833-4d3f-82d5-08ec5710251f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648117 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/1c37e4bb-792b-4317-87ae-ca4172740500-etcd-client\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648138 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-encryption-config\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648172 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrrl8\" (UniqueName: \"kubernetes.io/projected/8d495a4f-d952-4050-a895-e6650c083e0d-kube-api-access-rrrl8\") pod \"machine-approver-56656f9798-p8fx6\" (UID: \"8d495a4f-d952-4050-a895-e6650c083e0d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648195 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/653b37fe-d452-4111-b27f-ef75530abe41-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648217 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-audit-policies\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648238 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-audit-dir\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648263 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6db03bb1-4833-4d3f-82d5-08ec5710251f-images\") pod \"machine-api-operator-5694c8668f-fsx74\" (UID: \"6db03bb1-4833-4d3f-82d5-08ec5710251f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648285 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/696d81dd-3f1a-4c58-ae69-29fff54e590b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tczgr\" (UID: \"696d81dd-3f1a-4c58-ae69-29fff54e590b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648320 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fpmxk\" (UID: 
\"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648341 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-trusted-ca-bundle\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648358 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d495a4f-d952-4050-a895-e6650c083e0d-config\") pod \"machine-approver-56656f9798-p8fx6\" (UID: \"8d495a4f-d952-4050-a895-e6650c083e0d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648378 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/653b37fe-d452-4111-b27f-ef75530abe41-audit-dir\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648398 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc2mb\" (UniqueName: \"kubernetes.io/projected/696d81dd-3f1a-4c58-ae69-29fff54e590b-kube-api-access-xc2mb\") pod \"openshift-apiserver-operator-796bbdcf4f-tczgr\" (UID: \"696d81dd-3f1a-4c58-ae69-29fff54e590b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648421 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6db03bb1-4833-4d3f-82d5-08ec5710251f-config\") pod \"machine-api-operator-5694c8668f-fsx74\" (UID: \"6db03bb1-4833-4d3f-82d5-08ec5710251f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648426 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648441 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8eb3ecfb-3675-4931-b618-9a5ba6d23b1d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-brcd7\" (UID: \"8eb3ecfb-3675-4931-b618-9a5ba6d23b1d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648462 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqdxf\" (UniqueName: \"kubernetes.io/projected/7d5c80c8-4e74-4618-96c0-8e76168ad709-kube-api-access-dqdxf\") pod \"controller-manager-879f6c89f-fpmxk\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648495 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4cdqj\" (UniqueName: \"kubernetes.io/projected/f56b5e44-f079-4c56-9e19-e09996979003-kube-api-access-4cdqj\") pod \"route-controller-manager-6576b87f9c-4zwkl\" (UID: \"f56b5e44-f079-4c56-9e19-e09996979003\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648515 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pz26\" (UniqueName: \"kubernetes.io/projected/3f7de4a5-3819-41c0-9e2e-766dcff408bb-kube-api-access-4pz26\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648545 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c37e4bb-792b-4317-87ae-ca4172740500-serving-cert\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648567 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8d495a4f-d952-4050-a895-e6650c083e0d-auth-proxy-config\") pod \"machine-approver-56656f9798-p8fx6\" (UID: \"8d495a4f-d952-4050-a895-e6650c083e0d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648587 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/653b37fe-d452-4111-b27f-ef75530abe41-etcd-client\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648606 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/00332b75-a73b-49c1-9b72-73445baccf6d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-468fl\" (UID: \"00332b75-a73b-49c1-9b72-73445baccf6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648629 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/653b37fe-d452-4111-b27f-ef75530abe41-audit\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.648648 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plh2t\" (UniqueName: \"kubernetes.io/projected/653b37fe-d452-4111-b27f-ef75530abe41-kube-api-access-plh2t\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.649809 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 
15:29:57.650196 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.650325 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.650406 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.650561 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.650594 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qm54x"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.650704 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6db03bb1-4833-4d3f-82d5-08ec5710251f-images\") pod \"machine-api-operator-5694c8668f-fsx74\" (UID: \"6db03bb1-4833-4d3f-82d5-08ec5710251f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.650771 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.650909 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.651047 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.651147 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.651202 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.651257 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6zjns"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.651744 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.652383 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.652442 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.652813 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.653163 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.653303 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.653385 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.656114 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.656567 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.657354 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.665937 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.667671 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6db03bb1-4833-4d3f-82d5-08ec5710251f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-fsx74\" (UID: \"6db03bb1-4833-4d3f-82d5-08ec5710251f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.670658 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.670915 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.678015 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.678272 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.678374 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.678700 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.678899 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.679331 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.679917 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.681682 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.681889 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-zs2tk"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.682520 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-zs2tk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.685840 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.686284 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-cb6xn"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.686687 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4268l"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.687138 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4268l" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.687551 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.687698 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-cb6xn" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.701719 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.705815 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-w2lv5"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.706476 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-w2lv5" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.708652 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.709297 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.710544 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.711041 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.711570 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.717280 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.717373 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.718106 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9bx7"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.718287 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.718554 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.718839 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9bx7" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.719085 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.720824 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.720863 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.722349 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-fsx74"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.722580 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fpmxk"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.729033 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.733057 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.734184 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.735553 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-w2lv5"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.737201 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.737611 5008 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.738767 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.742521 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-cb6xn"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.742559 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.744935 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.745878 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-qs6wx"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.746354 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-qs6wx" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.746909 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-p7nds"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.747682 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-p7nds" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749186 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e0bc350-e279-4e74-a70e-c89593f115f3-config\") pod \"kube-controller-manager-operator-78b949d7b-6lddg\" (UID: \"3e0bc350-e279-4e74-a70e-c89593f115f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749201 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-s5vvl"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749226 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cdqj\" (UniqueName: \"kubernetes.io/projected/f56b5e44-f079-4c56-9e19-e09996979003-kube-api-access-4cdqj\") pod \"route-controller-manager-6576b87f9c-4zwkl\" (UID: \"f56b5e44-f079-4c56-9e19-e09996979003\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749269 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pz26\" (UniqueName: \"kubernetes.io/projected/3f7de4a5-3819-41c0-9e2e-766dcff408bb-kube-api-access-4pz26\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749287 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e0bc350-e279-4e74-a70e-c89593f115f3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6lddg\" (UID: 
\"3e0bc350-e279-4e74-a70e-c89593f115f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749327 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-webhook-cert\") pod \"packageserver-d55dfcdfc-j8wt8\" (UID: \"c9bc5b93-0c42-401c-8ca5-e5154e8be34d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749348 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b987d67-e424-4286-a25d-11bfc4d1e577-config\") pod \"console-operator-58897d9998-zs2tk\" (UID: \"5b987d67-e424-4286-a25d-11bfc4d1e577\") " pod="openshift-console-operator/console-operator-58897d9998-zs2tk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749363 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749382 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8d495a4f-d952-4050-a895-e6650c083e0d-auth-proxy-config\") pod \"machine-approver-56656f9798-p8fx6\" (UID: \"8d495a4f-d952-4050-a895-e6650c083e0d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749397 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/653b37fe-d452-4111-b27f-ef75530abe41-etcd-client\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749413 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2kqn\" (UniqueName: \"kubernetes.io/projected/7473d665-3627-4470-a820-ebdbdc113587-kube-api-access-l2kqn\") pod \"marketplace-operator-79b997595-4268l\" (UID: \"7473d665-3627-4470-a820-ebdbdc113587\") " pod="openshift-marketplace/marketplace-operator-79b997595-4268l" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749431 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/653b37fe-d452-4111-b27f-ef75530abe41-audit\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749446 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plh2t\" (UniqueName: \"kubernetes.io/projected/653b37fe-d452-4111-b27f-ef75530abe41-kube-api-access-plh2t\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 
15:29:57.749461 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c37e4bb-792b-4317-87ae-ca4172740500-config\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749477 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3c5e8be2-fe94-488c-801e-d1a56700bfa5-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ztdsl\" (UID: \"3c5e8be2-fe94-488c-801e-d1a56700bfa5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749495 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-serving-cert\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749509 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-client-ca\") pod \"controller-manager-879f6c89f-fpmxk\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749524 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b987d67-e424-4286-a25d-11bfc4d1e577-trusted-ca\") pod \"console-operator-58897d9998-zs2tk\" (UID: \"5b987d67-e424-4286-a25d-11bfc4d1e577\") " pod="openshift-console-operator/console-operator-58897d9998-zs2tk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749538 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsr8x\" (UniqueName: \"kubernetes.io/projected/cb93f308-4554-41a0-a5c7-28d516a419c7-kube-api-access-rsr8x\") pod \"machine-config-controller-84d6567774-ghcqr\" (UID: \"cb93f308-4554-41a0-a5c7-28d516a419c7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749555 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749572 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcl2c\" (UniqueName: \"kubernetes.io/projected/1c37e4bb-792b-4317-87ae-ca4172740500-kube-api-access-mcl2c\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749587 5008 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/653b37fe-d452-4111-b27f-ef75530abe41-serving-cert\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749603 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf25c\" (UniqueName: \"kubernetes.io/projected/3c5e8be2-fe94-488c-801e-d1a56700bfa5-kube-api-access-rf25c\") pod \"cluster-samples-operator-665b6dd947-ztdsl\" (UID: \"3c5e8be2-fe94-488c-801e-d1a56700bfa5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749630 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f56b5e44-f079-4c56-9e19-e09996979003-serving-cert\") pod \"route-controller-manager-6576b87f9c-4zwkl\" (UID: \"f56b5e44-f079-4c56-9e19-e09996979003\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749645 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nktwv\" (UniqueName: \"kubernetes.io/projected/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-kube-api-access-nktwv\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749660 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/657b37ac-43ff-4309-9bfa-5220bccb08c0-signing-key\") pod \"service-ca-9c57cc56f-w2lv5\" (UID: \"657b37ac-43ff-4309-9bfa-5220bccb08c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-w2lv5" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749675 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/653b37fe-d452-4111-b27f-ef75530abe41-config\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749693 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-oauth-serving-cert\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749710 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749727 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98a7839a-3ca2-49f7-a330-f77ffc4e4da3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zrdsf\" (UID: 
\"98a7839a-3ca2-49f7-a330-f77ffc4e4da3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749743 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8eb3ecfb-3675-4931-b618-9a5ba6d23b1d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-brcd7\" (UID: \"8eb3ecfb-3675-4931-b618-9a5ba6d23b1d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749760 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/653b37fe-d452-4111-b27f-ef75530abe41-encryption-config\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749776 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8d495a4f-d952-4050-a895-e6650c083e0d-machine-approver-tls\") pod \"machine-approver-56656f9798-p8fx6\" (UID: \"8d495a4f-d952-4050-a895-e6650c083e0d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749808 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec989c54-8ec3-4f9d-87b0-2665776ffd15-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9gw94\" (UID: \"ec989c54-8ec3-4f9d-87b0-2665776ffd15\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749824 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749840 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-apiservice-cert\") pod \"packageserver-d55dfcdfc-j8wt8\" (UID: \"c9bc5b93-0c42-401c-8ca5-e5154e8be34d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749855 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f56b5e44-f079-4c56-9e19-e09996979003-client-ca\") pod \"route-controller-manager-6576b87f9c-4zwkl\" (UID: \"f56b5e44-f079-4c56-9e19-e09996979003\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749870 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb-service-ca-bundle\") pod \"authentication-operator-69f744f599-wkn92\" (UID: 
\"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749885 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.749908 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/696d81dd-3f1a-4c58-ae69-29fff54e590b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tczgr\" (UID: \"696d81dd-3f1a-4c58-ae69-29fff54e590b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.750740 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c37e4bb-792b-4317-87ae-ca4172740500-config\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.750809 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2msjg\" (UniqueName: \"kubernetes.io/projected/820dc798-ef25-4bda-947f-8c66b290816d-kube-api-access-2msjg\") pod \"dns-operator-744455d44c-2dsnp\" (UID: \"820dc798-ef25-4bda-947f-8c66b290816d\") " pod="openshift-dns-operator/dns-operator-744455d44c-2dsnp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.750840 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/380625b0-02b5-417a-bd1e-7ccf56f56059-metrics-certs\") pod \"router-default-5444994796-lkcrp\" (UID: \"380625b0-02b5-417a-bd1e-7ccf56f56059\") " pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.750856 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-config\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.750882 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f9xk\" (UniqueName: \"kubernetes.io/projected/380625b0-02b5-417a-bd1e-7ccf56f56059-kube-api-access-7f9xk\") pod \"router-default-5444994796-lkcrp\" (UID: \"380625b0-02b5-417a-bd1e-7ccf56f56059\") " pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.750898 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfxpn\" (UniqueName: \"kubernetes.io/projected/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-kube-api-access-xfxpn\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: 
I0129 15:29:57.750917 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1c37e4bb-792b-4317-87ae-ca4172740500-etcd-client\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.750933 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/380625b0-02b5-417a-bd1e-7ccf56f56059-service-ca-bundle\") pod \"router-default-5444994796-lkcrp\" (UID: \"380625b0-02b5-417a-bd1e-7ccf56f56059\") " pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.751396 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/696d81dd-3f1a-4c58-ae69-29fff54e590b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tczgr\" (UID: \"696d81dd-3f1a-4c58-ae69-29fff54e590b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.751860 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8d495a4f-d952-4050-a895-e6650c083e0d-auth-proxy-config\") pod \"machine-approver-56656f9798-p8fx6\" (UID: \"8d495a4f-d952-4050-a895-e6650c083e0d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.753647 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f56b5e44-f079-4c56-9e19-e09996979003-serving-cert\") pod \"route-controller-manager-6576b87f9c-4zwkl\" (UID: \"f56b5e44-f079-4c56-9e19-e09996979003\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.755014 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-oauth-serving-cert\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.755445 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/653b37fe-d452-4111-b27f-ef75530abe41-config\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.755691 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-config\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.757041 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-encryption-config\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.757227 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.757300 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-audit-policies\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.757639 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-audit-dir\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.757682 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.757712 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/696d81dd-3f1a-4c58-ae69-29fff54e590b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tczgr\" (UID: \"696d81dd-3f1a-4c58-ae69-29fff54e590b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.757733 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-trusted-ca-bundle\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.757790 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d495a4f-d952-4050-a895-e6650c083e0d-config\") pod \"machine-approver-56656f9798-p8fx6\" (UID: \"8d495a4f-d952-4050-a895-e6650c083e0d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.757813 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec989c54-8ec3-4f9d-87b0-2665776ffd15-config\") pod \"kube-apiserver-operator-766d6c64bb-9gw94\" (UID: \"ec989c54-8ec3-4f9d-87b0-2665776ffd15\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.757839 5008 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/653b37fe-d452-4111-b27f-ef75530abe41-audit-dir\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.757863 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc2mb\" (UniqueName: \"kubernetes.io/projected/696d81dd-3f1a-4c58-ae69-29fff54e590b-kube-api-access-xc2mb\") pod \"openshift-apiserver-operator-796bbdcf4f-tczgr\" (UID: \"696d81dd-3f1a-4c58-ae69-29fff54e590b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.757918 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.757945 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgpph\" (UniqueName: \"kubernetes.io/projected/1b0f95d5-456d-45a7-9bfd-49efbf2a16ce-kube-api-access-bgpph\") pod \"kube-storage-version-migrator-operator-b67b599dd-f5fs6\" (UID: \"1b0f95d5-456d-45a7-9bfd-49efbf2a16ce\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.757965 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8eb3ecfb-3675-4931-b618-9a5ba6d23b1d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-brcd7\" (UID: \"8eb3ecfb-3675-4931-b618-9a5ba6d23b1d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.757981 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqdxf\" (UniqueName: \"kubernetes.io/projected/7d5c80c8-4e74-4618-96c0-8e76168ad709-kube-api-access-dqdxf\") pod \"controller-manager-879f6c89f-fpmxk\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.757999 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1408f146-4652-41e3-8947-2f230e515750-metrics-tls\") pod \"ingress-operator-5b745b69d9-2h8sf\" (UID: \"1408f146-4652-41e3-8947-2f230e515750\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.758048 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1408f146-4652-41e3-8947-2f230e515750-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2h8sf\" (UID: \"1408f146-4652-41e3-8947-2f230e515750\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.758066 5008 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c37e4bb-792b-4317-87ae-ca4172740500-serving-cert\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.758085 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98a7839a-3ca2-49f7-a330-f77ffc4e4da3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zrdsf\" (UID: \"98a7839a-3ca2-49f7-a330-f77ffc4e4da3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.758100 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/657b37ac-43ff-4309-9bfa-5220bccb08c0-signing-cabundle\") pod \"service-ca-9c57cc56f-w2lv5\" (UID: \"657b37ac-43ff-4309-9bfa-5220bccb08c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-w2lv5" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.758114 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-tmpfs\") pod \"packageserver-d55dfcdfc-j8wt8\" (UID: \"c9bc5b93-0c42-401c-8ca5-e5154e8be34d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.758130 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/00332b75-a73b-49c1-9b72-73445baccf6d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-468fl\" (UID: \"00332b75-a73b-49c1-9b72-73445baccf6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.758153 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4bmd\" (UniqueName: \"kubernetes.io/projected/8eb3ecfb-3675-4931-b618-9a5ba6d23b1d-kube-api-access-v4bmd\") pod \"openshift-controller-manager-operator-756b6f6bc6-brcd7\" (UID: \"8eb3ecfb-3675-4931-b618-9a5ba6d23b1d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.758218 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb-config\") pod \"authentication-operator-69f744f599-wkn92\" (UID: \"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.758234 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wkn92\" (UID: \"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.758252 5008 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb-serving-cert\") pod \"authentication-operator-69f744f599-wkn92\" (UID: \"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.758268 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec989c54-8ec3-4f9d-87b0-2665776ffd15-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9gw94\" (UID: \"ec989c54-8ec3-4f9d-87b0-2665776ffd15\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.758296 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cb93f308-4554-41a0-a5c7-28d516a419c7-proxy-tls\") pod \"machine-config-controller-84d6567774-ghcqr\" (UID: \"cb93f308-4554-41a0-a5c7-28d516a419c7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.758311 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsqb8\" (UniqueName: \"kubernetes.io/projected/b1a4a04b-067c-43f1-b355-46161babe869-kube-api-access-tsqb8\") pod \"collect-profiles-29494995-x4n8l\" (UID: \"b1a4a04b-067c-43f1-b355-46161babe869\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.758330 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-serving-cert\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.758346 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b0f95d5-456d-45a7-9bfd-49efbf2a16ce-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-f5fs6\" (UID: \"1b0f95d5-456d-45a7-9bfd-49efbf2a16ce\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.758399 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f56b5e44-f079-4c56-9e19-e09996979003-client-ca\") pod \"route-controller-manager-6576b87f9c-4zwkl\" (UID: \"f56b5e44-f079-4c56-9e19-e09996979003\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.759707 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-client-ca\") pod \"controller-manager-879f6c89f-fpmxk\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.760177 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/8eb3ecfb-3675-4931-b618-9a5ba6d23b1d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-brcd7\" (UID: \"8eb3ecfb-3675-4931-b618-9a5ba6d23b1d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.760286 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-468fl"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.761097 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-trusted-ca-bundle\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.761401 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb-config\") pod \"authentication-operator-69f744f599-wkn92\" (UID: \"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.775995 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb-service-ca-bundle\") pod \"authentication-operator-69f744f599-wkn92\" (UID: \"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.776485 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1c37e4bb-792b-4317-87ae-ca4172740500-etcd-client\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.776971 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/653b37fe-d452-4111-b27f-ef75530abe41-audit\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.776999 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wkn92\" (UID: \"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.777931 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-audit-policies\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.778050 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-service-ca\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.778113 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b987d67-e424-4286-a25d-11bfc4d1e577-serving-cert\") pod \"console-operator-58897d9998-zs2tk\" (UID: \"5b987d67-e424-4286-a25d-11bfc4d1e577\") " pod="openshift-console-operator/console-operator-58897d9998-zs2tk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.778139 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6lwz\" (UniqueName: \"kubernetes.io/projected/657b37ac-43ff-4309-9bfa-5220bccb08c0-kube-api-access-r6lwz\") pod \"service-ca-9c57cc56f-w2lv5\" (UID: \"657b37ac-43ff-4309-9bfa-5220bccb08c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-w2lv5" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.778183 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/653b37fe-d452-4111-b27f-ef75530abe41-image-import-ca\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.778201 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d5c80c8-4e74-4618-96c0-8e76168ad709-serving-cert\") pod \"controller-manager-879f6c89f-fpmxk\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.778223 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/820dc798-ef25-4bda-947f-8c66b290816d-metrics-tls\") pod \"dns-operator-744455d44c-2dsnp\" (UID: \"820dc798-ef25-4bda-947f-8c66b290816d\") " pod="openshift-dns-operator/dns-operator-744455d44c-2dsnp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.778225 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-g2rk6"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.778246 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r42b\" (UniqueName: \"kubernetes.io/projected/f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb-kube-api-access-8r42b\") pod \"authentication-operator-69f744f599-wkn92\" (UID: \"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.778284 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1c37e4bb-792b-4317-87ae-ca4172740500-etcd-service-ca\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.778305 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/653b37fe-d452-4111-b27f-ef75530abe41-etcd-serving-ca\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.778337 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.778345 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/653b37fe-d452-4111-b27f-ef75530abe41-etcd-client\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.778368 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-audit-policies\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.778390 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/653b37fe-d452-4111-b27f-ef75530abe41-encryption-config\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.778743 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8d495a4f-d952-4050-a895-e6650c083e0d-machine-approver-tls\") pod \"machine-approver-56656f9798-p8fx6\" (UID: \"8d495a4f-d952-4050-a895-e6650c083e0d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.779235 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-serving-cert\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.779263 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-oauth-config\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.779296 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7qjf\" (UniqueName: \"kubernetes.io/projected/5b987d67-e424-4286-a25d-11bfc4d1e577-kube-api-access-r7qjf\") pod \"console-operator-58897d9998-zs2tk\" (UID: \"5b987d67-e424-4286-a25d-11bfc4d1e577\") " pod="openshift-console-operator/console-operator-58897d9998-zs2tk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.779301 5008 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-audit-dir\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.779321 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b0f95d5-456d-45a7-9bfd-49efbf2a16ce-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-f5fs6\" (UID: \"1b0f95d5-456d-45a7-9bfd-49efbf2a16ce\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.779344 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7473d665-3627-4470-a820-ebdbdc113587-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4268l\" (UID: \"7473d665-3627-4470-a820-ebdbdc113587\") " pod="openshift-marketplace/marketplace-operator-79b997595-4268l" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.779372 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-config\") pod \"controller-manager-879f6c89f-fpmxk\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.779390 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.779453 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e0bc350-e279-4e74-a70e-c89593f115f3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6lddg\" (UID: \"3e0bc350-e279-4e74-a70e-c89593f115f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.779475 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-audit-dir\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.779496 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0b6fe31f-5401-4a2e-bccb-e57fab2a35ba-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-cb6xn\" (UID: \"0b6fe31f-5401-4a2e-bccb-e57fab2a35ba\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cb6xn" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.779516 5008 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b1a4a04b-067c-43f1-b355-46161babe869-secret-volume\") pod \"collect-profiles-29494995-x4n8l\" (UID: \"b1a4a04b-067c-43f1-b355-46161babe869\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.780141 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-encryption-config\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.780450 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-service-ca\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.780657 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-etcd-client\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.780683 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng4mr\" (UniqueName: \"kubernetes.io/projected/00332b75-a73b-49c1-9b72-73445baccf6d-kube-api-access-ng4mr\") pod \"openshift-config-operator-7777fb866f-468fl\" (UID: \"00332b75-a73b-49c1-9b72-73445baccf6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.780709 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1c37e4bb-792b-4317-87ae-ca4172740500-etcd-ca\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.780736 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cb93f308-4554-41a0-a5c7-28d516a419c7-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-ghcqr\" (UID: \"cb93f308-4554-41a0-a5c7-28d516a419c7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.780759 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f56b5e44-f079-4c56-9e19-e09996979003-config\") pod \"route-controller-manager-6576b87f9c-4zwkl\" (UID: \"f56b5e44-f079-4c56-9e19-e09996979003\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.780823 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/653b37fe-d452-4111-b27f-ef75530abe41-node-pullsecrets\") pod \"apiserver-76f77b778f-4l85w\" (UID: 
\"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.780834 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.780847 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00332b75-a73b-49c1-9b72-73445baccf6d-serving-cert\") pod \"openshift-config-operator-7777fb866f-468fl\" (UID: \"00332b75-a73b-49c1-9b72-73445baccf6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.780872 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.780892 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.780955 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5jrc\" (UniqueName: \"kubernetes.io/projected/1408f146-4652-41e3-8947-2f230e515750-kube-api-access-d5jrc\") pod \"ingress-operator-5b745b69d9-2h8sf\" (UID: \"1408f146-4652-41e3-8947-2f230e515750\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.780979 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/380625b0-02b5-417a-bd1e-7ccf56f56059-default-certificate\") pod \"router-default-5444994796-lkcrp\" (UID: \"380625b0-02b5-417a-bd1e-7ccf56f56059\") " pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.781007 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrrl8\" (UniqueName: \"kubernetes.io/projected/8d495a4f-d952-4050-a895-e6650c083e0d-kube-api-access-rrrl8\") pod \"machine-approver-56656f9798-p8fx6\" (UID: \"8d495a4f-d952-4050-a895-e6650c083e0d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.781029 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/653b37fe-d452-4111-b27f-ef75530abe41-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.781512 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/653b37fe-d452-4111-b27f-ef75530abe41-node-pullsecrets\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.781575 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/00332b75-a73b-49c1-9b72-73445baccf6d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-468fl\" (UID: \"00332b75-a73b-49c1-9b72-73445baccf6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.781692 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/653b37fe-d452-4111-b27f-ef75530abe41-audit-dir\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.782445 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-serving-cert\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.783097 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/653b37fe-d452-4111-b27f-ef75530abe41-serving-cert\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.783311 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d495a4f-d952-4050-a895-e6650c083e0d-config\") pod \"machine-approver-56656f9798-p8fx6\" (UID: \"8d495a4f-d952-4050-a895-e6650c083e0d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.783864 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.785899 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1c37e4bb-792b-4317-87ae-ca4172740500-etcd-ca\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.786091 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00332b75-a73b-49c1-9b72-73445baccf6d-serving-cert\") pod \"openshift-config-operator-7777fb866f-468fl\" (UID: \"00332b75-a73b-49c1-9b72-73445baccf6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.786356 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb-serving-cert\") pod \"authentication-operator-69f744f599-wkn92\" (UID: \"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.786383 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.786909 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8eb3ecfb-3675-4931-b618-9a5ba6d23b1d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-brcd7\" (UID: \"8eb3ecfb-3675-4931-b618-9a5ba6d23b1d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.787452 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-config\") pod \"controller-manager-879f6c89f-fpmxk\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.787541 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/380625b0-02b5-417a-bd1e-7ccf56f56059-stats-auth\") pod \"router-default-5444994796-lkcrp\" (UID: \"380625b0-02b5-417a-bd1e-7ccf56f56059\") " pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.787580 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1408f146-4652-41e3-8947-2f230e515750-trusted-ca\") pod \"ingress-operator-5b745b69d9-2h8sf\" (UID: \"1408f146-4652-41e3-8947-2f230e515750\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.787618 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmjd2\" (UniqueName: \"kubernetes.io/projected/0b6fe31f-5401-4a2e-bccb-e57fab2a35ba-kube-api-access-cmjd2\") pod \"multus-admission-controller-857f4d67dd-cb6xn\" (UID: \"0b6fe31f-5401-4a2e-bccb-e57fab2a35ba\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cb6xn" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.787704 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98a7839a-3ca2-49f7-a330-f77ffc4e4da3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zrdsf\" (UID: \"98a7839a-3ca2-49f7-a330-f77ffc4e4da3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.787745 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fpmxk\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:29:57 crc 
kubenswrapper[5008]: I0129 15:29:57.787866 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1a4a04b-067c-43f1-b355-46161babe869-config-volume\") pod \"collect-profiles-29494995-x4n8l\" (UID: \"b1a4a04b-067c-43f1-b355-46161babe869\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.787900 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7473d665-3627-4470-a820-ebdbdc113587-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4268l\" (UID: \"7473d665-3627-4470-a820-ebdbdc113587\") " pod="openshift-marketplace/marketplace-operator-79b997595-4268l" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.787933 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng4c5\" (UniqueName: \"kubernetes.io/projected/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-kube-api-access-ng4c5\") pod \"packageserver-d55dfcdfc-j8wt8\" (UID: \"c9bc5b93-0c42-401c-8ca5-e5154e8be34d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.788125 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/653b37fe-d452-4111-b27f-ef75530abe41-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.789007 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.790378 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/653b37fe-d452-4111-b27f-ef75530abe41-image-import-ca\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.791277 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/696d81dd-3f1a-4c58-ae69-29fff54e590b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tczgr\" (UID: \"696d81dd-3f1a-4c58-ae69-29fff54e590b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.792016 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wkn92"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.792036 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.792226 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4l85w"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.792560 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/653b37fe-d452-4111-b27f-ef75530abe41-etcd-serving-ca\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.793150 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1c37e4bb-792b-4317-87ae-ca4172740500-etcd-service-ca\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.793370 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d5c80c8-4e74-4618-96c0-8e76168ad709-serving-cert\") pod \"controller-manager-879f6c89f-fpmxk\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.794634 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c37e4bb-792b-4317-87ae-ca4172740500-serving-cert\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.795063 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.795327 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fpmxk\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.795822 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-etcd-client\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.795941 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-oauth-config\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.797653 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4268l"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.802696 5008 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.804247 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f56b5e44-f079-4c56-9e19-e09996979003-config\") pod \"route-controller-manager-6576b87f9c-4zwkl\" (UID: \"f56b5e44-f079-4c56-9e19-e09996979003\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.806430 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9bx7"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.811208 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-zs2tk"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.818943 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-v7r8x"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.821226 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.822741 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.822776 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.825100 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.827409 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qm54x"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.828733 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.830478 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.832127 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.833566 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.835257 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-p7nds"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.836923 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.839952 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2dsnp"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.841597 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 29 15:29:57 crc 
kubenswrapper[5008]: I0129 15:29:57.841879 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.843198 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-6wmrp"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.844393 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.847582 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6zjns"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.848906 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-g9x2n"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.850123 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-tw5d5"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.850256 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.851196 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-tw5d5"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.851309 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-tw5d5" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.852254 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-g9x2n"] Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.879685 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmrtr\" (UniqueName: \"kubernetes.io/projected/6db03bb1-4833-4d3f-82d5-08ec5710251f-kube-api-access-wmrtr\") pod \"machine-api-operator-5694c8668f-fsx74\" (UID: \"6db03bb1-4833-4d3f-82d5-08ec5710251f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892619 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f9xk\" (UniqueName: \"kubernetes.io/projected/380625b0-02b5-417a-bd1e-7ccf56f56059-kube-api-access-7f9xk\") pod \"router-default-5444994796-lkcrp\" (UID: \"380625b0-02b5-417a-bd1e-7ccf56f56059\") " pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892657 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfxpn\" (UniqueName: \"kubernetes.io/projected/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-kube-api-access-xfxpn\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892679 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/380625b0-02b5-417a-bd1e-7ccf56f56059-service-ca-bundle\") pod \"router-default-5444994796-lkcrp\" (UID: \"380625b0-02b5-417a-bd1e-7ccf56f56059\") " pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892707 5008 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892726 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892748 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec989c54-8ec3-4f9d-87b0-2665776ffd15-config\") pod \"kube-apiserver-operator-766d6c64bb-9gw94\" (UID: \"ec989c54-8ec3-4f9d-87b0-2665776ffd15\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892768 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892809 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgpph\" (UniqueName: \"kubernetes.io/projected/1b0f95d5-456d-45a7-9bfd-49efbf2a16ce-kube-api-access-bgpph\") pod \"kube-storage-version-migrator-operator-b67b599dd-f5fs6\" (UID: \"1b0f95d5-456d-45a7-9bfd-49efbf2a16ce\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892839 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1408f146-4652-41e3-8947-2f230e515750-metrics-tls\") pod \"ingress-operator-5b745b69d9-2h8sf\" (UID: \"1408f146-4652-41e3-8947-2f230e515750\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892856 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1408f146-4652-41e3-8947-2f230e515750-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2h8sf\" (UID: \"1408f146-4652-41e3-8947-2f230e515750\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892872 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-tmpfs\") pod \"packageserver-d55dfcdfc-j8wt8\" (UID: \"c9bc5b93-0c42-401c-8ca5-e5154e8be34d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892889 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/98a7839a-3ca2-49f7-a330-f77ffc4e4da3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zrdsf\" (UID: \"98a7839a-3ca2-49f7-a330-f77ffc4e4da3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892905 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/657b37ac-43ff-4309-9bfa-5220bccb08c0-signing-cabundle\") pod \"service-ca-9c57cc56f-w2lv5\" (UID: \"657b37ac-43ff-4309-9bfa-5220bccb08c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-w2lv5" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892929 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec989c54-8ec3-4f9d-87b0-2665776ffd15-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9gw94\" (UID: \"ec989c54-8ec3-4f9d-87b0-2665776ffd15\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892954 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsqb8\" (UniqueName: \"kubernetes.io/projected/b1a4a04b-067c-43f1-b355-46161babe869-kube-api-access-tsqb8\") pod \"collect-profiles-29494995-x4n8l\" (UID: \"b1a4a04b-067c-43f1-b355-46161babe869\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892975 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cb93f308-4554-41a0-a5c7-28d516a419c7-proxy-tls\") pod \"machine-config-controller-84d6567774-ghcqr\" (UID: \"cb93f308-4554-41a0-a5c7-28d516a419c7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.892992 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b0f95d5-456d-45a7-9bfd-49efbf2a16ce-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-f5fs6\" (UID: \"1b0f95d5-456d-45a7-9bfd-49efbf2a16ce\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893010 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6lwz\" (UniqueName: \"kubernetes.io/projected/657b37ac-43ff-4309-9bfa-5220bccb08c0-kube-api-access-r6lwz\") pod \"service-ca-9c57cc56f-w2lv5\" (UID: \"657b37ac-43ff-4309-9bfa-5220bccb08c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-w2lv5" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893026 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b987d67-e424-4286-a25d-11bfc4d1e577-serving-cert\") pod \"console-operator-58897d9998-zs2tk\" (UID: \"5b987d67-e424-4286-a25d-11bfc4d1e577\") " pod="openshift-console-operator/console-operator-58897d9998-zs2tk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893044 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/820dc798-ef25-4bda-947f-8c66b290816d-metrics-tls\") pod \"dns-operator-744455d44c-2dsnp\" (UID: 
\"820dc798-ef25-4bda-947f-8c66b290816d\") " pod="openshift-dns-operator/dns-operator-744455d44c-2dsnp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893068 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-audit-policies\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893084 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7473d665-3627-4470-a820-ebdbdc113587-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4268l\" (UID: \"7473d665-3627-4470-a820-ebdbdc113587\") " pod="openshift-marketplace/marketplace-operator-79b997595-4268l" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893101 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7qjf\" (UniqueName: \"kubernetes.io/projected/5b987d67-e424-4286-a25d-11bfc4d1e577-kube-api-access-r7qjf\") pod \"console-operator-58897d9998-zs2tk\" (UID: \"5b987d67-e424-4286-a25d-11bfc4d1e577\") " pod="openshift-console-operator/console-operator-58897d9998-zs2tk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893118 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b0f95d5-456d-45a7-9bfd-49efbf2a16ce-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-f5fs6\" (UID: \"1b0f95d5-456d-45a7-9bfd-49efbf2a16ce\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893138 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893164 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e0bc350-e279-4e74-a70e-c89593f115f3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6lddg\" (UID: \"3e0bc350-e279-4e74-a70e-c89593f115f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893182 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-audit-dir\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893217 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0b6fe31f-5401-4a2e-bccb-e57fab2a35ba-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-cb6xn\" (UID: \"0b6fe31f-5401-4a2e-bccb-e57fab2a35ba\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-cb6xn" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893235 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b1a4a04b-067c-43f1-b355-46161babe869-secret-volume\") pod \"collect-profiles-29494995-x4n8l\" (UID: \"b1a4a04b-067c-43f1-b355-46161babe869\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893257 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cb93f308-4554-41a0-a5c7-28d516a419c7-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-ghcqr\" (UID: \"cb93f308-4554-41a0-a5c7-28d516a419c7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893276 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893294 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5jrc\" (UniqueName: \"kubernetes.io/projected/1408f146-4652-41e3-8947-2f230e515750-kube-api-access-d5jrc\") pod \"ingress-operator-5b745b69d9-2h8sf\" (UID: \"1408f146-4652-41e3-8947-2f230e515750\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893309 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/380625b0-02b5-417a-bd1e-7ccf56f56059-default-certificate\") pod \"router-default-5444994796-lkcrp\" (UID: \"380625b0-02b5-417a-bd1e-7ccf56f56059\") " pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893330 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1408f146-4652-41e3-8947-2f230e515750-trusted-ca\") pod \"ingress-operator-5b745b69d9-2h8sf\" (UID: \"1408f146-4652-41e3-8947-2f230e515750\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893348 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmjd2\" (UniqueName: \"kubernetes.io/projected/0b6fe31f-5401-4a2e-bccb-e57fab2a35ba-kube-api-access-cmjd2\") pod \"multus-admission-controller-857f4d67dd-cb6xn\" (UID: \"0b6fe31f-5401-4a2e-bccb-e57fab2a35ba\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cb6xn" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893363 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/380625b0-02b5-417a-bd1e-7ccf56f56059-stats-auth\") pod \"router-default-5444994796-lkcrp\" (UID: \"380625b0-02b5-417a-bd1e-7ccf56f56059\") " pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893373 5008 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-tmpfs\") pod \"packageserver-d55dfcdfc-j8wt8\" (UID: \"c9bc5b93-0c42-401c-8ca5-e5154e8be34d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893379 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98a7839a-3ca2-49f7-a330-f77ffc4e4da3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zrdsf\" (UID: \"98a7839a-3ca2-49f7-a330-f77ffc4e4da3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893477 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7473d665-3627-4470-a820-ebdbdc113587-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4268l\" (UID: \"7473d665-3627-4470-a820-ebdbdc113587\") " pod="openshift-marketplace/marketplace-operator-79b997595-4268l" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893501 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1a4a04b-067c-43f1-b355-46161babe869-config-volume\") pod \"collect-profiles-29494995-x4n8l\" (UID: \"b1a4a04b-067c-43f1-b355-46161babe869\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893522 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng4c5\" (UniqueName: \"kubernetes.io/projected/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-kube-api-access-ng4c5\") pod \"packageserver-d55dfcdfc-j8wt8\" (UID: \"c9bc5b93-0c42-401c-8ca5-e5154e8be34d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893546 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893562 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e0bc350-e279-4e74-a70e-c89593f115f3-config\") pod \"kube-controller-manager-operator-78b949d7b-6lddg\" (UID: \"3e0bc350-e279-4e74-a70e-c89593f115f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893588 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e0bc350-e279-4e74-a70e-c89593f115f3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6lddg\" (UID: \"3e0bc350-e279-4e74-a70e-c89593f115f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893603 5008 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-webhook-cert\") pod \"packageserver-d55dfcdfc-j8wt8\" (UID: \"c9bc5b93-0c42-401c-8ca5-e5154e8be34d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893615 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/380625b0-02b5-417a-bd1e-7ccf56f56059-service-ca-bundle\") pod \"router-default-5444994796-lkcrp\" (UID: \"380625b0-02b5-417a-bd1e-7ccf56f56059\") " pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893633 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893660 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b987d67-e424-4286-a25d-11bfc4d1e577-config\") pod \"console-operator-58897d9998-zs2tk\" (UID: \"5b987d67-e424-4286-a25d-11bfc4d1e577\") " pod="openshift-console-operator/console-operator-58897d9998-zs2tk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893675 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2kqn\" (UniqueName: \"kubernetes.io/projected/7473d665-3627-4470-a820-ebdbdc113587-kube-api-access-l2kqn\") pod \"marketplace-operator-79b997595-4268l\" (UID: \"7473d665-3627-4470-a820-ebdbdc113587\") " pod="openshift-marketplace/marketplace-operator-79b997595-4268l" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893705 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3c5e8be2-fe94-488c-801e-d1a56700bfa5-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ztdsl\" (UID: \"3c5e8be2-fe94-488c-801e-d1a56700bfa5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893706 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b0f95d5-456d-45a7-9bfd-49efbf2a16ce-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-f5fs6\" (UID: \"1b0f95d5-456d-45a7-9bfd-49efbf2a16ce\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893731 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsr8x\" (UniqueName: \"kubernetes.io/projected/cb93f308-4554-41a0-a5c7-28d516a419c7-kube-api-access-rsr8x\") pod \"machine-config-controller-84d6567774-ghcqr\" (UID: \"cb93f308-4554-41a0-a5c7-28d516a419c7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893753 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893773 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b987d67-e424-4286-a25d-11bfc4d1e577-trusted-ca\") pod \"console-operator-58897d9998-zs2tk\" (UID: \"5b987d67-e424-4286-a25d-11bfc4d1e577\") " pod="openshift-console-operator/console-operator-58897d9998-zs2tk" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893846 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf25c\" (UniqueName: \"kubernetes.io/projected/3c5e8be2-fe94-488c-801e-d1a56700bfa5-kube-api-access-rf25c\") pod \"cluster-samples-operator-665b6dd947-ztdsl\" (UID: \"3c5e8be2-fe94-488c-801e-d1a56700bfa5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893870 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/657b37ac-43ff-4309-9bfa-5220bccb08c0-signing-key\") pod \"service-ca-9c57cc56f-w2lv5\" (UID: \"657b37ac-43ff-4309-9bfa-5220bccb08c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-w2lv5" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893875 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-audit-dir\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893892 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893909 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98a7839a-3ca2-49f7-a330-f77ffc4e4da3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zrdsf\" (UID: \"98a7839a-3ca2-49f7-a330-f77ffc4e4da3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893930 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec989c54-8ec3-4f9d-87b0-2665776ffd15-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9gw94\" (UID: \"ec989c54-8ec3-4f9d-87b0-2665776ffd15\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893947 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6zjns\" 
(UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893962 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-apiservice-cert\") pod \"packageserver-d55dfcdfc-j8wt8\" (UID: \"c9bc5b93-0c42-401c-8ca5-e5154e8be34d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893980 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.893999 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2msjg\" (UniqueName: \"kubernetes.io/projected/820dc798-ef25-4bda-947f-8c66b290816d-kube-api-access-2msjg\") pod \"dns-operator-744455d44c-2dsnp\" (UID: \"820dc798-ef25-4bda-947f-8c66b290816d\") " pod="openshift-dns-operator/dns-operator-744455d44c-2dsnp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.894018 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/380625b0-02b5-417a-bd1e-7ccf56f56059-metrics-certs\") pod \"router-default-5444994796-lkcrp\" (UID: \"380625b0-02b5-417a-bd1e-7ccf56f56059\") " pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.895033 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e0bc350-e279-4e74-a70e-c89593f115f3-config\") pod \"kube-controller-manager-operator-78b949d7b-6lddg\" (UID: \"3e0bc350-e279-4e74-a70e-c89593f115f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.895183 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cb93f308-4554-41a0-a5c7-28d516a419c7-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-ghcqr\" (UID: \"cb93f308-4554-41a0-a5c7-28d516a419c7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.896171 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/820dc798-ef25-4bda-947f-8c66b290816d-metrics-tls\") pod \"dns-operator-744455d44c-2dsnp\" (UID: \"820dc798-ef25-4bda-947f-8c66b290816d\") " pod="openshift-dns-operator/dns-operator-744455d44c-2dsnp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.896709 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e0bc350-e279-4e74-a70e-c89593f115f3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6lddg\" (UID: \"3e0bc350-e279-4e74-a70e-c89593f115f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 
15:29:57.897168 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/380625b0-02b5-417a-bd1e-7ccf56f56059-metrics-certs\") pod \"router-default-5444994796-lkcrp\" (UID: \"380625b0-02b5-417a-bd1e-7ccf56f56059\") " pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.897222 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b0f95d5-456d-45a7-9bfd-49efbf2a16ce-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-f5fs6\" (UID: \"1b0f95d5-456d-45a7-9bfd-49efbf2a16ce\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.897632 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3c5e8be2-fe94-488c-801e-d1a56700bfa5-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ztdsl\" (UID: \"3c5e8be2-fe94-488c-801e-d1a56700bfa5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.897759 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/380625b0-02b5-417a-bd1e-7ccf56f56059-stats-auth\") pod \"router-default-5444994796-lkcrp\" (UID: \"380625b0-02b5-417a-bd1e-7ccf56f56059\") " pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.899442 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/380625b0-02b5-417a-bd1e-7ccf56f56059-default-certificate\") pod \"router-default-5444994796-lkcrp\" (UID: \"380625b0-02b5-417a-bd1e-7ccf56f56059\") " pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.899998 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w978\" (UniqueName: \"kubernetes.io/projected/64cf2ff9-40f4-48a5-a16c-6513cf0470bd-kube-api-access-2w978\") pod \"downloads-7954f5f757-6wmrp\" (UID: \"64cf2ff9-40f4-48a5-a16c-6513cf0470bd\") " pod="openshift-console/downloads-7954f5f757-6wmrp" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.921816 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.942533 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.961840 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.986959 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 15:29:57 crc kubenswrapper[5008]: I0129 15:29:57.995970 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.001852 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.006011 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.021706 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.027772 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.063057 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.066334 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.077427 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.082093 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.083682 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-audit-policies\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.102032 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.110131 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.122987 5008 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.125146 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.142366 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.147880 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.162719 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.164520 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.183201 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.188506 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.209361 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.215487 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.222855 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.243439 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.262998 5008 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.268228 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.283709 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.302340 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.322693 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.327952 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b1a4a04b-067c-43f1-b355-46161babe869-secret-volume\") pod \"collect-profiles-29494995-x4n8l\" (UID: \"b1a4a04b-067c-43f1-b355-46161babe869\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.342365 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.344771 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1a4a04b-067c-43f1-b355-46161babe869-config-volume\") pod \"collect-profiles-29494995-x4n8l\" (UID: \"b1a4a04b-067c-43f1-b355-46161babe869\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.362695 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.383015 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.402233 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.423150 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.428081 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec989c54-8ec3-4f9d-87b0-2665776ffd15-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9gw94\" (UID: \"ec989c54-8ec3-4f9d-87b0-2665776ffd15\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.443281 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 29 15:29:58 crc 
kubenswrapper[5008]: I0129 15:29:58.455242 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec989c54-8ec3-4f9d-87b0-2665776ffd15-config\") pod \"kube-apiserver-operator-766d6c64bb-9gw94\" (UID: \"ec989c54-8ec3-4f9d-87b0-2665776ffd15\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.463944 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.483151 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.502748 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.508463 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cb93f308-4554-41a0-a5c7-28d516a419c7-proxy-tls\") pod \"machine-config-controller-84d6567774-ghcqr\" (UID: \"cb93f308-4554-41a0-a5c7-28d516a419c7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.522880 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.544282 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.562829 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.598539 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.601891 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.605812 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1408f146-4652-41e3-8947-2f230e515750-trusted-ca\") pod \"ingress-operator-5b745b69d9-2h8sf\" (UID: \"1408f146-4652-41e3-8947-2f230e515750\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.623656 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.643227 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.649732 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1408f146-4652-41e3-8947-2f230e515750-metrics-tls\") pod \"ingress-operator-5b745b69d9-2h8sf\" (UID: \"1408f146-4652-41e3-8947-2f230e515750\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf" Jan 29 15:29:58 crc kubenswrapper[5008]: E0129 
15:29:58.651139 5008 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Jan 29 15:29:58 crc kubenswrapper[5008]: E0129 15:29:58.651253 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6db03bb1-4833-4d3f-82d5-08ec5710251f-config podName:6db03bb1-4833-4d3f-82d5-08ec5710251f nodeName:}" failed. No retries permitted until 2026-01-29 15:29:59.151222199 +0000 UTC m=+142.824076476 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6db03bb1-4833-4d3f-82d5-08ec5710251f-config") pod "machine-api-operator-5694c8668f-fsx74" (UID: "6db03bb1-4833-4d3f-82d5-08ec5710251f") : failed to sync configmap cache: timed out waiting for the condition Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.662632 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.683632 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.701024 5008 request.go:700] Waited for 1.018087865s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0 Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.703843 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.716584 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b987d67-e424-4286-a25d-11bfc4d1e577-serving-cert\") pod \"console-operator-58897d9998-zs2tk\" (UID: \"5b987d67-e424-4286-a25d-11bfc4d1e577\") " pod="openshift-console-operator/console-operator-58897d9998-zs2tk" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.723082 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.744481 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.745605 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b987d67-e424-4286-a25d-11bfc4d1e577-config\") pod \"console-operator-58897d9998-zs2tk\" (UID: \"5b987d67-e424-4286-a25d-11bfc4d1e577\") " pod="openshift-console-operator/console-operator-58897d9998-zs2tk" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.772967 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.776688 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b987d67-e424-4286-a25d-11bfc4d1e577-trusted-ca\") pod \"console-operator-58897d9998-zs2tk\" (UID: \"5b987d67-e424-4286-a25d-11bfc4d1e577\") " pod="openshift-console-operator/console-operator-58897d9998-zs2tk" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.790686 5008 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.801989 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.823458 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.842382 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.862354 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.868576 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7473d665-3627-4470-a820-ebdbdc113587-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4268l\" (UID: \"7473d665-3627-4470-a820-ebdbdc113587\") " pod="openshift-marketplace/marketplace-operator-79b997595-4268l" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.882459 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 29 15:29:58 crc kubenswrapper[5008]: E0129 15:29:58.893468 5008 secret.go:188] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 29 15:29:58 crc kubenswrapper[5008]: E0129 15:29:58.893552 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98a7839a-3ca2-49f7-a330-f77ffc4e4da3-serving-cert podName:98a7839a-3ca2-49f7-a330-f77ffc4e4da3 nodeName:}" failed. No retries permitted until 2026-01-29 15:29:59.393528802 +0000 UTC m=+143.066383049 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/98a7839a-3ca2-49f7-a330-f77ffc4e4da3-serving-cert") pod "openshift-kube-scheduler-operator-5fdd9b5758-zrdsf" (UID: "98a7839a-3ca2-49f7-a330-f77ffc4e4da3") : failed to sync secret cache: timed out waiting for the condition Jan 29 15:29:58 crc kubenswrapper[5008]: E0129 15:29:58.893564 5008 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Jan 29 15:29:58 crc kubenswrapper[5008]: E0129 15:29:58.893683 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/657b37ac-43ff-4309-9bfa-5220bccb08c0-signing-cabundle podName:657b37ac-43ff-4309-9bfa-5220bccb08c0 nodeName:}" failed. No retries permitted until 2026-01-29 15:29:59.393648285 +0000 UTC m=+143.066502592 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/657b37ac-43ff-4309-9bfa-5220bccb08c0-signing-cabundle") pod "service-ca-9c57cc56f-w2lv5" (UID: "657b37ac-43ff-4309-9bfa-5220bccb08c0") : failed to sync configmap cache: timed out waiting for the condition Jan 29 15:29:58 crc kubenswrapper[5008]: E0129 15:29:58.894833 5008 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Jan 29 15:29:58 crc kubenswrapper[5008]: E0129 15:29:58.894889 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-apiservice-cert podName:c9bc5b93-0c42-401c-8ca5-e5154e8be34d nodeName:}" failed. No retries permitted until 2026-01-29 15:29:59.394875048 +0000 UTC m=+143.067729295 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-apiservice-cert") pod "packageserver-d55dfcdfc-j8wt8" (UID: "c9bc5b93-0c42-401c-8ca5-e5154e8be34d") : failed to sync secret cache: timed out waiting for the condition Jan 29 15:29:58 crc kubenswrapper[5008]: E0129 15:29:58.894905 5008 secret.go:188] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Jan 29 15:29:58 crc kubenswrapper[5008]: E0129 15:29:58.894971 5008 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Jan 29 15:29:58 crc kubenswrapper[5008]: E0129 15:29:58.895003 5008 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Jan 29 15:29:58 crc kubenswrapper[5008]: E0129 15:29:58.894990 5008 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 29 15:29:58 crc kubenswrapper[5008]: E0129 15:29:58.894998 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b6fe31f-5401-4a2e-bccb-e57fab2a35ba-webhook-certs podName:0b6fe31f-5401-4a2e-bccb-e57fab2a35ba nodeName:}" failed. No retries permitted until 2026-01-29 15:29:59.39497461 +0000 UTC m=+143.067828917 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0b6fe31f-5401-4a2e-bccb-e57fab2a35ba-webhook-certs") pod "multus-admission-controller-857f4d67dd-cb6xn" (UID: "0b6fe31f-5401-4a2e-bccb-e57fab2a35ba") : failed to sync secret cache: timed out waiting for the condition Jan 29 15:29:58 crc kubenswrapper[5008]: E0129 15:29:58.895092 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/657b37ac-43ff-4309-9bfa-5220bccb08c0-signing-key podName:657b37ac-43ff-4309-9bfa-5220bccb08c0 nodeName:}" failed. No retries permitted until 2026-01-29 15:29:59.395080713 +0000 UTC m=+143.067934970 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/657b37ac-43ff-4309-9bfa-5220bccb08c0-signing-key") pod "service-ca-9c57cc56f-w2lv5" (UID: "657b37ac-43ff-4309-9bfa-5220bccb08c0") : failed to sync secret cache: timed out waiting for the condition Jan 29 15:29:58 crc kubenswrapper[5008]: E0129 15:29:58.895110 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-webhook-cert podName:c9bc5b93-0c42-401c-8ca5-e5154e8be34d nodeName:}" failed. No retries permitted until 2026-01-29 15:29:59.395100543 +0000 UTC m=+143.067954790 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-webhook-cert") pod "packageserver-d55dfcdfc-j8wt8" (UID: "c9bc5b93-0c42-401c-8ca5-e5154e8be34d") : failed to sync secret cache: timed out waiting for the condition Jan 29 15:29:58 crc kubenswrapper[5008]: E0129 15:29:58.895124 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/98a7839a-3ca2-49f7-a330-f77ffc4e4da3-config podName:98a7839a-3ca2-49f7-a330-f77ffc4e4da3 nodeName:}" failed. No retries permitted until 2026-01-29 15:29:59.395117424 +0000 UTC m=+143.067971771 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/98a7839a-3ca2-49f7-a330-f77ffc4e4da3-config") pod "openshift-kube-scheduler-operator-5fdd9b5758-zrdsf" (UID: "98a7839a-3ca2-49f7-a330-f77ffc4e4da3") : failed to sync configmap cache: timed out waiting for the condition Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.902731 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.921970 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.943101 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.962499 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 29 15:29:58 crc kubenswrapper[5008]: I0129 15:29:58.982172 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.002133 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.022511 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.041764 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.047449 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7473d665-3627-4470-a820-ebdbdc113587-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4268l\" (UID: \"7473d665-3627-4470-a820-ebdbdc113587\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-4268l" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.062532 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.083514 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.103664 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.122446 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.123446 5008 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-console/downloads-7954f5f757-6wmrp" secret="" err="failed to sync secret cache: timed out waiting for the condition" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.123553 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-6wmrp" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.145197 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.183637 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.203543 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.214600 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6db03bb1-4833-4d3f-82d5-08ec5710251f-config\") pod \"machine-api-operator-5694c8668f-fsx74\" (UID: \"6db03bb1-4833-4d3f-82d5-08ec5710251f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.223060 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.243894 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.263817 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.283347 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.303350 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.323350 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.342535 5008 reflector.go:368] Caches 
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.342535 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.358541 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-6wmrp"]
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.365911 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 29 15:29:59 crc kubenswrapper[5008]: W0129 15:29:59.367439 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64cf2ff9_40f4_48a5_a16c_6513cf0470bd.slice/crio-9abd198d8b241b24280129834e1f5180fb259afc84a75988e07119fc2a4ada66 WatchSource:0}: Error finding container 9abd198d8b241b24280129834e1f5180fb259afc84a75988e07119fc2a4ada66: Status 404 returned error can't find the container with id 9abd198d8b241b24280129834e1f5180fb259afc84a75988e07119fc2a4ada66
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.382936 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.403645 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.418913 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/657b37ac-43ff-4309-9bfa-5220bccb08c0-signing-key\") pod \"service-ca-9c57cc56f-w2lv5\" (UID: \"657b37ac-43ff-4309-9bfa-5220bccb08c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-w2lv5"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.418965 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98a7839a-3ca2-49f7-a330-f77ffc4e4da3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zrdsf\" (UID: \"98a7839a-3ca2-49f7-a330-f77ffc4e4da3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.418991 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-apiservice-cert\") pod \"packageserver-d55dfcdfc-j8wt8\" (UID: \"c9bc5b93-0c42-401c-8ca5-e5154e8be34d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.419090 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98a7839a-3ca2-49f7-a330-f77ffc4e4da3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zrdsf\" (UID: \"98a7839a-3ca2-49f7-a330-f77ffc4e4da3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.419117 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/657b37ac-43ff-4309-9bfa-5220bccb08c0-signing-cabundle\") pod \"service-ca-9c57cc56f-w2lv5\" (UID: \"657b37ac-43ff-4309-9bfa-5220bccb08c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-w2lv5"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.419408 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0b6fe31f-5401-4a2e-bccb-e57fab2a35ba-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-cb6xn\" (UID: \"0b6fe31f-5401-4a2e-bccb-e57fab2a35ba\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cb6xn"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.419488 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-webhook-cert\") pod \"packageserver-d55dfcdfc-j8wt8\" (UID: \"c9bc5b93-0c42-401c-8ca5-e5154e8be34d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.421443 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98a7839a-3ca2-49f7-a330-f77ffc4e4da3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zrdsf\" (UID: \"98a7839a-3ca2-49f7-a330-f77ffc4e4da3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.423237 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.423714 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98a7839a-3ca2-49f7-a330-f77ffc4e4da3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zrdsf\" (UID: \"98a7839a-3ca2-49f7-a330-f77ffc4e4da3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.423909 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-apiservice-cert\") pod \"packageserver-d55dfcdfc-j8wt8\" (UID: \"c9bc5b93-0c42-401c-8ca5-e5154e8be34d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.424035 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0b6fe31f-5401-4a2e-bccb-e57fab2a35ba-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-cb6xn\" (UID: \"0b6fe31f-5401-4a2e-bccb-e57fab2a35ba\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cb6xn"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.424578 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-webhook-cert\") pod \"packageserver-d55dfcdfc-j8wt8\" (UID: \"c9bc5b93-0c42-401c-8ca5-e5154e8be34d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.442347 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.462154 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.472494 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/657b37ac-43ff-4309-9bfa-5220bccb08c0-signing-cabundle\") pod \"service-ca-9c57cc56f-w2lv5\" (UID: \"657b37ac-43ff-4309-9bfa-5220bccb08c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-w2lv5"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.473012 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/657b37ac-43ff-4309-9bfa-5220bccb08c0-signing-key\") pod \"service-ca-9c57cc56f-w2lv5\" (UID: \"657b37ac-43ff-4309-9bfa-5220bccb08c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-w2lv5"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.482439 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.522566 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cdqj\" (UniqueName: \"kubernetes.io/projected/f56b5e44-f079-4c56-9e19-e09996979003-kube-api-access-4cdqj\") pod \"route-controller-manager-6576b87f9c-4zwkl\" (UID: \"f56b5e44-f079-4c56-9e19-e09996979003\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.538260 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pz26\" (UniqueName: \"kubernetes.io/projected/3f7de4a5-3819-41c0-9e2e-766dcff408bb-kube-api-access-4pz26\") pod \"console-f9d7485db-g2rk6\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " pod="openshift-console/console-f9d7485db-g2rk6"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.558605 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nktwv\" (UniqueName: \"kubernetes.io/projected/4adf65cb-4f11-4061-bcb5-71c3d9b890f7-kube-api-access-nktwv\") pod \"apiserver-7bbb656c7d-n2sqt\" (UID: \"4adf65cb-4f11-4061-bcb5-71c3d9b890f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.585487 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcl2c\" (UniqueName: \"kubernetes.io/projected/1c37e4bb-792b-4317-87ae-ca4172740500-kube-api-access-mcl2c\") pod \"etcd-operator-b45778765-v7r8x\" (UID: \"1c37e4bb-792b-4317-87ae-ca4172740500\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.597138 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plh2t\" (UniqueName: \"kubernetes.io/projected/653b37fe-d452-4111-b27f-ef75530abe41-kube-api-access-plh2t\") pod \"apiserver-76f77b778f-4l85w\" (UID: \"653b37fe-d452-4111-b27f-ef75530abe41\") " pod="openshift-apiserver/apiserver-76f77b778f-4l85w"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.620080 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqdxf\" (UniqueName: \"kubernetes.io/projected/7d5c80c8-4e74-4618-96c0-8e76168ad709-kube-api-access-dqdxf\") pod \"controller-manager-879f6c89f-fpmxk\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.635122 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4bmd\" (UniqueName: \"kubernetes.io/projected/8eb3ecfb-3675-4931-b618-9a5ba6d23b1d-kube-api-access-v4bmd\") pod \"openshift-controller-manager-operator-756b6f6bc6-brcd7\" (UID: \"8eb3ecfb-3675-4931-b618-9a5ba6d23b1d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.654174 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.656157 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc2mb\" (UniqueName: \"kubernetes.io/projected/696d81dd-3f1a-4c58-ae69-29fff54e590b-kube-api-access-xc2mb\") pod \"openshift-apiserver-operator-796bbdcf4f-tczgr\" (UID: \"696d81dd-3f1a-4c58-ae69-29fff54e590b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.690010 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.697926 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.707230 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrrl8\" (UniqueName: \"kubernetes.io/projected/8d495a4f-d952-4050-a895-e6650c083e0d-kube-api-access-rrrl8\") pod \"machine-approver-56656f9798-p8fx6\" (UID: \"8d495a4f-d952-4050-a895-e6650c083e0d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.707697 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.721103 5008 request.go:700] Waited for 1.870537953s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.721461 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r42b\" (UniqueName: \"kubernetes.io/projected/f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb-kube-api-access-8r42b\") pod \"authentication-operator-69f744f599-wkn92\" (UID: \"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.727705 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.733137 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt"
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.742517 5008 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.763067 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.768742 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.782933 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.798033 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.802664 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.805652 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.814719 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.823207 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.856575 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfxpn\" (UniqueName: \"kubernetes.io/projected/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-kube-api-access-xfxpn\") pod \"oauth-openshift-558db77b4-6zjns\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.875764 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgpph\" (UniqueName: \"kubernetes.io/projected/1b0f95d5-456d-45a7-9bfd-49efbf2a16ce-kube-api-access-bgpph\") pod \"kube-storage-version-migrator-operator-b67b599dd-f5fs6\" (UID: \"1b0f95d5-456d-45a7-9bfd-49efbf2a16ce\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.892359 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.899188 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsqb8\" (UniqueName: \"kubernetes.io/projected/b1a4a04b-067c-43f1-b355-46161babe869-kube-api-access-tsqb8\") pod \"collect-profiles-29494995-x4n8l\" (UID: \"b1a4a04b-067c-43f1-b355-46161babe869\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.916347 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1408f146-4652-41e3-8947-2f230e515750-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2h8sf\" (UID: \"1408f146-4652-41e3-8947-2f230e515750\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.920291 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.928317 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.939856 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec989c54-8ec3-4f9d-87b0-2665776ffd15-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9gw94\" (UID: \"ec989c54-8ec3-4f9d-87b0-2665776ffd15\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.949049 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.956102 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6lwz\" (UniqueName: \"kubernetes.io/projected/657b37ac-43ff-4309-9bfa-5220bccb08c0-kube-api-access-r6lwz\") pod \"service-ca-9c57cc56f-w2lv5\" (UID: \"657b37ac-43ff-4309-9bfa-5220bccb08c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-w2lv5" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.979298 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98a7839a-3ca2-49f7-a330-f77ffc4e4da3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zrdsf\" (UID: \"98a7839a-3ca2-49f7-a330-f77ffc4e4da3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf" Jan 29 15:29:59 crc kubenswrapper[5008]: I0129 15:29:59.996002 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-w2lv5" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.000968 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7qjf\" (UniqueName: \"kubernetes.io/projected/5b987d67-e424-4286-a25d-11bfc4d1e577-kube-api-access-r7qjf\") pod \"console-operator-58897d9998-zs2tk\" (UID: \"5b987d67-e424-4286-a25d-11bfc4d1e577\") " pod="openshift-console-operator/console-operator-58897d9998-zs2tk" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.006042 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.009712 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fpmxk"] Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.025165 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng4c5\" (UniqueName: \"kubernetes.io/projected/c9bc5b93-0c42-401c-8ca5-e5154e8be34d-kube-api-access-ng4c5\") pod \"packageserver-d55dfcdfc-j8wt8\" (UID: \"c9bc5b93-0c42-401c-8ca5-e5154e8be34d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.042475 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmjd2\" (UniqueName: \"kubernetes.io/projected/0b6fe31f-5401-4a2e-bccb-e57fab2a35ba-kube-api-access-cmjd2\") pod \"multus-admission-controller-857f4d67dd-cb6xn\" (UID: \"0b6fe31f-5401-4a2e-bccb-e57fab2a35ba\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cb6xn" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.054068 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e0bc350-e279-4e74-a70e-c89593f115f3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6lddg\" (UID: \"3e0bc350-e279-4e74-a70e-c89593f115f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.077573 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf25c\" (UniqueName: \"kubernetes.io/projected/3c5e8be2-fe94-488c-801e-d1a56700bfa5-kube-api-access-rf25c\") pod \"cluster-samples-operator-665b6dd947-ztdsl\" (UID: \"3c5e8be2-fe94-488c-801e-d1a56700bfa5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.097358 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2kqn\" (UniqueName: \"kubernetes.io/projected/7473d665-3627-4470-a820-ebdbdc113587-kube-api-access-l2kqn\") pod \"marketplace-operator-79b997595-4268l\" (UID: \"7473d665-3627-4470-a820-ebdbdc113587\") " pod="openshift-marketplace/marketplace-operator-79b997595-4268l" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.120329 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.120688 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2msjg\" (UniqueName: \"kubernetes.io/projected/820dc798-ef25-4bda-947f-8c66b290816d-kube-api-access-2msjg\") pod \"dns-operator-744455d44c-2dsnp\" (UID: \"820dc798-ef25-4bda-947f-8c66b290816d\") " pod="openshift-dns-operator/dns-operator-744455d44c-2dsnp" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.127388 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l"] Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.141734 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4"] Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.142893 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.144953 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.145180 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsr8x\" (UniqueName: \"kubernetes.io/projected/cb93f308-4554-41a0-a5c7-28d516a419c7-kube-api-access-rsr8x\") pod \"machine-config-controller-84d6567774-ghcqr\" (UID: \"cb93f308-4554-41a0-a5c7-28d516a419c7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.148634 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4"] Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.165601 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5jrc\" (UniqueName: \"kubernetes.io/projected/1408f146-4652-41e3-8947-2f230e515750-kube-api-access-d5jrc\") pod \"ingress-operator-5b745b69d9-2h8sf\" (UID: \"1408f146-4652-41e3-8947-2f230e515750\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.182893 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f9xk\" (UniqueName: \"kubernetes.io/projected/380625b0-02b5-417a-bd1e-7ccf56f56059-kube-api-access-7f9xk\") pod \"router-default-5444994796-lkcrp\" (UID: \"380625b0-02b5-417a-bd1e-7ccf56f56059\") " pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.202411 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2dsnp" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.202700 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 29 15:30:00 crc kubenswrapper[5008]: E0129 15:30:00.215277 5008 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Jan 29 15:30:00 crc kubenswrapper[5008]: E0129 15:30:00.215400 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6db03bb1-4833-4d3f-82d5-08ec5710251f-config podName:6db03bb1-4833-4d3f-82d5-08ec5710251f nodeName:}" failed. No retries permitted until 2026-01-29 15:30:01.21536919 +0000 UTC m=+144.888223467 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6db03bb1-4833-4d3f-82d5-08ec5710251f-config") pod "machine-api-operator-5694c8668f-fsx74" (UID: "6db03bb1-4833-4d3f-82d5-08ec5710251f") : failed to sync configmap cache: timed out waiting for the condition Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.221643 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.235327 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.263611 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6wmrp" event={"ID":"64cf2ff9-40f4-48a5-a16c-6513cf0470bd","Type":"ContainerStarted","Data":"9abd198d8b241b24280129834e1f5180fb259afc84a75988e07119fc2a4ada66"} Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.376522 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.376570 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4268l" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.376684 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-zs2tk" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.376863 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.377034 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-cb6xn" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.377127 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.376876 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.377609 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/272fd84c-e1ec-47ce-a8dc-fb0573d1208c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mqnz8\" (UID: \"272fd84c-e1ec-47ce-a8dc-fb0573d1208c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.377667 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-bound-sa-token\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.377718 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/30c54800-b443-4da8-9d41-22e8f156a1a1-trusted-ca\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.377771 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/30c54800-b443-4da8-9d41-22e8f156a1a1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.377838 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsm4s\" (UniqueName: \"kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-kube-api-access-tsm4s\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.377894 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-registry-tls\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.378459 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/30c54800-b443-4da8-9d41-22e8f156a1a1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.378497 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/272fd84c-e1ec-47ce-a8dc-fb0573d1208c-srv-cert\") pod \"olm-operator-6b444d44fb-mqnz8\" (UID: \"272fd84c-e1ec-47ce-a8dc-fb0573d1208c\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.378614 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: E0129 15:30:00.379128 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:00.879099957 +0000 UTC m=+144.551954314 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.379283 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/30c54800-b443-4da8-9d41-22e8f156a1a1-registry-certificates\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.437866 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.468488 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng4mr\" (UniqueName: \"kubernetes.io/projected/00332b75-a73b-49c1-9b72-73445baccf6d-kube-api-access-ng4mr\") pod \"openshift-config-operator-7777fb866f-468fl\" (UID: \"00332b75-a73b-49c1-9b72-73445baccf6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.480294 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:00 crc kubenswrapper[5008]: E0129 15:30:00.480424 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:00.980395429 +0000 UTC m=+144.653249706 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.482624 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-registry-tls\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.482715 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fwml\" (UniqueName: \"kubernetes.io/projected/20ed8d47-c62e-4dfd-aa4d-630a6db1b3a9-kube-api-access-4fwml\") pod \"migrator-59844c95c7-s5vvl\" (UID: \"20ed8d47-c62e-4dfd-aa4d-630a6db1b3a9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s5vvl" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.482776 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n6q5\" (UniqueName: \"kubernetes.io/projected/217f16d7-943b-4603-88fa-155377da9788-kube-api-access-7n6q5\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8mt\" (UID: \"217f16d7-943b-4603-88fa-155377da9788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.483144 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgvxq\" (UniqueName: \"kubernetes.io/projected/a161323e-d13e-46da-b8bd-347b56ef5110-kube-api-access-pgvxq\") pod \"dns-default-tw5d5\" (UID: \"a161323e-d13e-46da-b8bd-347b56ef5110\") " pod="openshift-dns/dns-default-tw5d5" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.483352 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5ca041e2-baff-40ee-8fc9-e9bc58aee628-registration-dir\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.483535 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/272fd84c-e1ec-47ce-a8dc-fb0573d1208c-srv-cert\") pod \"olm-operator-6b444d44fb-mqnz8\" (UID: \"272fd84c-e1ec-47ce-a8dc-fb0573d1208c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.483649 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/30c54800-b443-4da8-9d41-22e8f156a1a1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.483776 5008 reconciler_common.go:245] 
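
The repeated "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers" errors above are a self-resolving ordering problem: the image-registry PVC needs the hostpath CSI driver, but the driver's own pod (csi-hostpathplugin-g9x2n) is only now getting its volumes mounted in these same lines, and it has not yet registered over kubelet's plugin-registration socket (the registration-dir host path above). Until it does, every MountDevice/TearDown on pvc-657094db-... is parked and retried on the 500 ms backoff. One way to watch the registration land, sketched with client-go under the assumption that the node is named "crc": drivers that have completed registration appear in the node's CSINode object.

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	// CSINode lists every driver that has completed kubelet plugin
    	// registration on this node; the mount retries stop failing once
    	// kubevirt.io.hostpath-provisioner shows up here.
    	csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, d := range csiNode.Spec.Drivers {
    		fmt.Println("registered CSI driver:", d.Name)
    	}
    }
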
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a161323e-d13e-46da-b8bd-347b56ef5110-metrics-tls\") pod \"dns-default-tw5d5\" (UID: \"a161323e-d13e-46da-b8bd-347b56ef5110\") " pod="openshift-dns/dns-default-tw5d5" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.483913 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/632f321e-e374-410c-9dc3-0aacadc97f3b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bmtm4\" (UID: \"632f321e-e374-410c-9dc3-0aacadc97f3b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.484007 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.484114 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277-srv-cert\") pod \"catalog-operator-68c6474976-zvhxk\" (UID: \"0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.484235 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/30c54800-b443-4da8-9d41-22e8f156a1a1-registry-certificates\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.484284 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ed80deac-23a5-4504-af92-231afa07fd27-certs\") pod \"machine-config-server-qs6wx\" (UID: \"ed80deac-23a5-4504-af92-231afa07fd27\") " pod="openshift-machine-config-operator/machine-config-server-qs6wx" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.484401 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkrwh\" (UniqueName: \"kubernetes.io/projected/0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277-kube-api-access-qkrwh\") pod \"catalog-operator-68c6474976-zvhxk\" (UID: \"0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.484507 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3105b11-cb5b-4006-8f1b-17b90922d743-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-w5jbk\" (UID: \"e3105b11-cb5b-4006-8f1b-17b90922d743\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.484540 5008 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a161323e-d13e-46da-b8bd-347b56ef5110-config-volume\") pod \"dns-default-tw5d5\" (UID: \"a161323e-d13e-46da-b8bd-347b56ef5110\") " pod="openshift-dns/dns-default-tw5d5" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.484570 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44jf7\" (UniqueName: \"kubernetes.io/projected/e3105b11-cb5b-4006-8f1b-17b90922d743-kube-api-access-44jf7\") pod \"package-server-manager-789f6589d5-w5jbk\" (UID: \"e3105b11-cb5b-4006-8f1b-17b90922d743\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.484684 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/5ca041e2-baff-40ee-8fc9-e9bc58aee628-csi-data-dir\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.484772 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/632f321e-e374-410c-9dc3-0aacadc97f3b-images\") pod \"machine-config-operator-74547568cd-bmtm4\" (UID: \"632f321e-e374-410c-9dc3-0aacadc97f3b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.484844 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhkzq\" (UniqueName: \"kubernetes.io/projected/4a912999-007c-495d-aaa3-857d76158a91-kube-api-access-nhkzq\") pod \"collect-profiles-29495010-t7nh4\" (UID: \"4a912999-007c-495d-aaa3-857d76158a91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.484933 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aa595b2b-fee5-4e54-926b-40571cf2f472-cert\") pod \"ingress-canary-p7nds\" (UID: \"aa595b2b-fee5-4e54-926b-40571cf2f472\") " pod="openshift-ingress-canary/ingress-canary-p7nds" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.485106 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/217f16d7-943b-4603-88fa-155377da9788-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8mt\" (UID: \"217f16d7-943b-4603-88fa-155377da9788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.485208 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/5ca041e2-baff-40ee-8fc9-e9bc58aee628-mountpoint-dir\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.485276 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/217f16d7-943b-4603-88fa-155377da9788-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8mt\" (UID: \"217f16d7-943b-4603-88fa-155377da9788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.485320 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a912999-007c-495d-aaa3-857d76158a91-secret-volume\") pod \"collect-profiles-29495010-t7nh4\" (UID: \"4a912999-007c-495d-aaa3-857d76158a91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.485523 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/272fd84c-e1ec-47ce-a8dc-fb0573d1208c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mqnz8\" (UID: \"272fd84c-e1ec-47ce-a8dc-fb0573d1208c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.485573 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc6tk\" (UniqueName: \"kubernetes.io/projected/272fd84c-e1ec-47ce-a8dc-fb0573d1208c-kube-api-access-sc6tk\") pod \"olm-operator-6b444d44fb-mqnz8\" (UID: \"272fd84c-e1ec-47ce-a8dc-fb0573d1208c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.485633 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cf3d6df4-e07e-4d72-b2b6-20dcb29700d7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-x9bx7\" (UID: \"cf3d6df4-e07e-4d72-b2b6-20dcb29700d7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9bx7" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.485713 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-bound-sa-token\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.485754 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5ca041e2-baff-40ee-8fc9-e9bc58aee628-socket-dir\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.485858 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/632f321e-e374-410c-9dc3-0aacadc97f3b-proxy-tls\") pod \"machine-config-operator-74547568cd-bmtm4\" (UID: \"632f321e-e374-410c-9dc3-0aacadc97f3b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.485892 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a14210e2-42e9-45d9-8633-a5df1a863a9f-serving-cert\") pod \"service-ca-operator-777779d784-9b7ll\" (UID: \"a14210e2-42e9-45d9-8633-a5df1a863a9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.485946 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ed80deac-23a5-4504-af92-231afa07fd27-node-bootstrap-token\") pod \"machine-config-server-qs6wx\" (UID: \"ed80deac-23a5-4504-af92-231afa07fd27\") " pod="openshift-machine-config-operator/machine-config-server-qs6wx" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.485979 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a14210e2-42e9-45d9-8633-a5df1a863a9f-config\") pod \"service-ca-operator-777779d784-9b7ll\" (UID: \"a14210e2-42e9-45d9-8633-a5df1a863a9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.486099 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2dsz\" (UniqueName: \"kubernetes.io/projected/cf3d6df4-e07e-4d72-b2b6-20dcb29700d7-kube-api-access-d2dsz\") pod \"control-plane-machine-set-operator-78cbb6b69f-x9bx7\" (UID: \"cf3d6df4-e07e-4d72-b2b6-20dcb29700d7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9bx7" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.486132 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dg4s\" (UniqueName: \"kubernetes.io/projected/a14210e2-42e9-45d9-8633-a5df1a863a9f-kube-api-access-2dg4s\") pod \"service-ca-operator-777779d784-9b7ll\" (UID: \"a14210e2-42e9-45d9-8633-a5df1a863a9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.487318 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/30c54800-b443-4da8-9d41-22e8f156a1a1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.487396 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfwg7\" (UniqueName: \"kubernetes.io/projected/ed80deac-23a5-4504-af92-231afa07fd27-kube-api-access-gfwg7\") pod \"machine-config-server-qs6wx\" (UID: \"ed80deac-23a5-4504-af92-231afa07fd27\") " pod="openshift-machine-config-operator/machine-config-server-qs6wx" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.487540 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/30c54800-b443-4da8-9d41-22e8f156a1a1-trusted-ca\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.487584 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/5ca041e2-baff-40ee-8fc9-e9bc58aee628-plugins-dir\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.487622 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/217f16d7-943b-4603-88fa-155377da9788-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8mt\" (UID: \"217f16d7-943b-4603-88fa-155377da9788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.487682 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a912999-007c-495d-aaa3-857d76158a91-config-volume\") pod \"collect-profiles-29495010-t7nh4\" (UID: \"4a912999-007c-495d-aaa3-857d76158a91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.487718 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prgds\" (UniqueName: \"kubernetes.io/projected/aa595b2b-fee5-4e54-926b-40571cf2f472-kube-api-access-prgds\") pod \"ingress-canary-p7nds\" (UID: \"aa595b2b-fee5-4e54-926b-40571cf2f472\") " pod="openshift-ingress-canary/ingress-canary-p7nds" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.488165 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/30c54800-b443-4da8-9d41-22e8f156a1a1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.488231 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsm4s\" (UniqueName: \"kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-kube-api-access-tsm4s\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.488293 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kgkr\" (UniqueName: \"kubernetes.io/projected/632f321e-e374-410c-9dc3-0aacadc97f3b-kube-api-access-8kgkr\") pod \"machine-config-operator-74547568cd-bmtm4\" (UID: \"632f321e-e374-410c-9dc3-0aacadc97f3b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.488395 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lr4m\" (UniqueName: \"kubernetes.io/projected/5ca041e2-baff-40ee-8fc9-e9bc58aee628-kube-api-access-2lr4m\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:30:00 crc kubenswrapper[5008]: E0129 15:30:00.488438 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-29 15:30:00.988416239 +0000 UTC m=+144.661270586 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.488489 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277-profile-collector-cert\") pod \"catalog-operator-68c6474976-zvhxk\" (UID: \"0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.490102 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/30c54800-b443-4da8-9d41-22e8f156a1a1-registry-certificates\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.493571 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/272fd84c-e1ec-47ce-a8dc-fb0573d1208c-srv-cert\") pod \"olm-operator-6b444d44fb-mqnz8\" (UID: \"272fd84c-e1ec-47ce-a8dc-fb0573d1208c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.494385 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/30c54800-b443-4da8-9d41-22e8f156a1a1-trusted-ca\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.495672 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/30c54800-b443-4da8-9d41-22e8f156a1a1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.497241 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/272fd84c-e1ec-47ce-a8dc-fb0573d1208c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mqnz8\" (UID: \"272fd84c-e1ec-47ce-a8dc-fb0573d1208c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.499550 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-registry-tls\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.552397 5008 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsm4s\" (UniqueName: \"kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-kube-api-access-tsm4s\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.559519 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-bound-sa-token\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.591183 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:00 crc kubenswrapper[5008]: E0129 15:30:00.591412 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:01.091388505 +0000 UTC m=+144.764242742 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.591706 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/217f16d7-943b-4603-88fa-155377da9788-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8mt\" (UID: \"217f16d7-943b-4603-88fa-155377da9788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.592607 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/5ca041e2-baff-40ee-8fc9-e9bc58aee628-mountpoint-dir\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.593637 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/217f16d7-943b-4603-88fa-155377da9788-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8mt\" (UID: \"217f16d7-943b-4603-88fa-155377da9788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.593706 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/5ca041e2-baff-40ee-8fc9-e9bc58aee628-mountpoint-dir\") pod \"csi-hostpathplugin-g9x2n\" (UID: 
\"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.593736 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/217f16d7-943b-4603-88fa-155377da9788-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8mt\" (UID: \"217f16d7-943b-4603-88fa-155377da9788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.593751 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a912999-007c-495d-aaa3-857d76158a91-secret-volume\") pod \"collect-profiles-29495010-t7nh4\" (UID: \"4a912999-007c-495d-aaa3-857d76158a91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594230 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc6tk\" (UniqueName: \"kubernetes.io/projected/272fd84c-e1ec-47ce-a8dc-fb0573d1208c-kube-api-access-sc6tk\") pod \"olm-operator-6b444d44fb-mqnz8\" (UID: \"272fd84c-e1ec-47ce-a8dc-fb0573d1208c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594278 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cf3d6df4-e07e-4d72-b2b6-20dcb29700d7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-x9bx7\" (UID: \"cf3d6df4-e07e-4d72-b2b6-20dcb29700d7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9bx7" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594302 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5ca041e2-baff-40ee-8fc9-e9bc58aee628-socket-dir\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594341 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/632f321e-e374-410c-9dc3-0aacadc97f3b-proxy-tls\") pod \"machine-config-operator-74547568cd-bmtm4\" (UID: \"632f321e-e374-410c-9dc3-0aacadc97f3b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594359 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a14210e2-42e9-45d9-8633-a5df1a863a9f-serving-cert\") pod \"service-ca-operator-777779d784-9b7ll\" (UID: \"a14210e2-42e9-45d9-8633-a5df1a863a9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594380 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ed80deac-23a5-4504-af92-231afa07fd27-node-bootstrap-token\") pod \"machine-config-server-qs6wx\" (UID: \"ed80deac-23a5-4504-af92-231afa07fd27\") " pod="openshift-machine-config-operator/machine-config-server-qs6wx" Jan 29 
15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594413 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a14210e2-42e9-45d9-8633-a5df1a863a9f-config\") pod \"service-ca-operator-777779d784-9b7ll\" (UID: \"a14210e2-42e9-45d9-8633-a5df1a863a9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594434 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2dsz\" (UniqueName: \"kubernetes.io/projected/cf3d6df4-e07e-4d72-b2b6-20dcb29700d7-kube-api-access-d2dsz\") pod \"control-plane-machine-set-operator-78cbb6b69f-x9bx7\" (UID: \"cf3d6df4-e07e-4d72-b2b6-20dcb29700d7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9bx7" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594451 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dg4s\" (UniqueName: \"kubernetes.io/projected/a14210e2-42e9-45d9-8633-a5df1a863a9f-kube-api-access-2dg4s\") pod \"service-ca-operator-777779d784-9b7ll\" (UID: \"a14210e2-42e9-45d9-8633-a5df1a863a9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594489 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfwg7\" (UniqueName: \"kubernetes.io/projected/ed80deac-23a5-4504-af92-231afa07fd27-kube-api-access-gfwg7\") pod \"machine-config-server-qs6wx\" (UID: \"ed80deac-23a5-4504-af92-231afa07fd27\") " pod="openshift-machine-config-operator/machine-config-server-qs6wx" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594516 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/5ca041e2-baff-40ee-8fc9-e9bc58aee628-plugins-dir\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594532 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/217f16d7-943b-4603-88fa-155377da9788-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8mt\" (UID: \"217f16d7-943b-4603-88fa-155377da9788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594575 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a912999-007c-495d-aaa3-857d76158a91-config-volume\") pod \"collect-profiles-29495010-t7nh4\" (UID: \"4a912999-007c-495d-aaa3-857d76158a91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594598 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prgds\" (UniqueName: \"kubernetes.io/projected/aa595b2b-fee5-4e54-926b-40571cf2f472-kube-api-access-prgds\") pod \"ingress-canary-p7nds\" (UID: \"aa595b2b-fee5-4e54-926b-40571cf2f472\") " pod="openshift-ingress-canary/ingress-canary-p7nds" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594622 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kgkr\" 
(UniqueName: \"kubernetes.io/projected/632f321e-e374-410c-9dc3-0aacadc97f3b-kube-api-access-8kgkr\") pod \"machine-config-operator-74547568cd-bmtm4\" (UID: \"632f321e-e374-410c-9dc3-0aacadc97f3b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594659 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277-profile-collector-cert\") pod \"catalog-operator-68c6474976-zvhxk\" (UID: \"0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594684 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lr4m\" (UniqueName: \"kubernetes.io/projected/5ca041e2-baff-40ee-8fc9-e9bc58aee628-kube-api-access-2lr4m\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594701 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fwml\" (UniqueName: \"kubernetes.io/projected/20ed8d47-c62e-4dfd-aa4d-630a6db1b3a9-kube-api-access-4fwml\") pod \"migrator-59844c95c7-s5vvl\" (UID: \"20ed8d47-c62e-4dfd-aa4d-630a6db1b3a9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s5vvl" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594740 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n6q5\" (UniqueName: \"kubernetes.io/projected/217f16d7-943b-4603-88fa-155377da9788-kube-api-access-7n6q5\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8mt\" (UID: \"217f16d7-943b-4603-88fa-155377da9788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594763 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgvxq\" (UniqueName: \"kubernetes.io/projected/a161323e-d13e-46da-b8bd-347b56ef5110-kube-api-access-pgvxq\") pod \"dns-default-tw5d5\" (UID: \"a161323e-d13e-46da-b8bd-347b56ef5110\") " pod="openshift-dns/dns-default-tw5d5" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594821 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5ca041e2-baff-40ee-8fc9-e9bc58aee628-registration-dir\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594854 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a161323e-d13e-46da-b8bd-347b56ef5110-metrics-tls\") pod \"dns-default-tw5d5\" (UID: \"a161323e-d13e-46da-b8bd-347b56ef5110\") " pod="openshift-dns/dns-default-tw5d5" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594871 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/632f321e-e374-410c-9dc3-0aacadc97f3b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bmtm4\" (UID: \"632f321e-e374-410c-9dc3-0aacadc97f3b\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594921 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594948 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277-srv-cert\") pod \"catalog-operator-68c6474976-zvhxk\" (UID: \"0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.594991 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ed80deac-23a5-4504-af92-231afa07fd27-certs\") pod \"machine-config-server-qs6wx\" (UID: \"ed80deac-23a5-4504-af92-231afa07fd27\") " pod="openshift-machine-config-operator/machine-config-server-qs6wx" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.595021 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkrwh\" (UniqueName: \"kubernetes.io/projected/0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277-kube-api-access-qkrwh\") pod \"catalog-operator-68c6474976-zvhxk\" (UID: \"0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.595022 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5ca041e2-baff-40ee-8fc9-e9bc58aee628-socket-dir\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.595069 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3105b11-cb5b-4006-8f1b-17b90922d743-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-w5jbk\" (UID: \"e3105b11-cb5b-4006-8f1b-17b90922d743\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.595092 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a161323e-d13e-46da-b8bd-347b56ef5110-config-volume\") pod \"dns-default-tw5d5\" (UID: \"a161323e-d13e-46da-b8bd-347b56ef5110\") " pod="openshift-dns/dns-default-tw5d5" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.595116 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44jf7\" (UniqueName: \"kubernetes.io/projected/e3105b11-cb5b-4006-8f1b-17b90922d743-kube-api-access-44jf7\") pod \"package-server-manager-789f6589d5-w5jbk\" (UID: \"e3105b11-cb5b-4006-8f1b-17b90922d743\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.595118 5008 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5ca041e2-baff-40ee-8fc9-e9bc58aee628-registration-dir\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.595189 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/5ca041e2-baff-40ee-8fc9-e9bc58aee628-csi-data-dir\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.595197 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/5ca041e2-baff-40ee-8fc9-e9bc58aee628-plugins-dir\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.595232 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/632f321e-e374-410c-9dc3-0aacadc97f3b-images\") pod \"machine-config-operator-74547568cd-bmtm4\" (UID: \"632f321e-e374-410c-9dc3-0aacadc97f3b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4" Jan 29 15:30:00 crc kubenswrapper[5008]: E0129 15:30:00.595598 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:01.095576684 +0000 UTC m=+144.768430991 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.596806 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/5ca041e2-baff-40ee-8fc9-e9bc58aee628-csi-data-dir\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.596828 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a14210e2-42e9-45d9-8633-a5df1a863a9f-config\") pod \"service-ca-operator-777779d784-9b7ll\" (UID: \"a14210e2-42e9-45d9-8633-a5df1a863a9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.597439 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhkzq\" (UniqueName: \"kubernetes.io/projected/4a912999-007c-495d-aaa3-857d76158a91-kube-api-access-nhkzq\") pod \"collect-profiles-29495010-t7nh4\" (UID: \"4a912999-007c-495d-aaa3-857d76158a91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.597488 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aa595b2b-fee5-4e54-926b-40571cf2f472-cert\") pod \"ingress-canary-p7nds\" (UID: \"aa595b2b-fee5-4e54-926b-40571cf2f472\") " pod="openshift-ingress-canary/ingress-canary-p7nds" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.598298 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a912999-007c-495d-aaa3-857d76158a91-config-volume\") pod \"collect-profiles-29495010-t7nh4\" (UID: \"4a912999-007c-495d-aaa3-857d76158a91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.598415 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/632f321e-e374-410c-9dc3-0aacadc97f3b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bmtm4\" (UID: \"632f321e-e374-410c-9dc3-0aacadc97f3b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.598421 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/632f321e-e374-410c-9dc3-0aacadc97f3b-images\") pod \"machine-config-operator-74547568cd-bmtm4\" (UID: \"632f321e-e374-410c-9dc3-0aacadc97f3b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.599083 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a161323e-d13e-46da-b8bd-347b56ef5110-config-volume\") pod 
\"dns-default-tw5d5\" (UID: \"a161323e-d13e-46da-b8bd-347b56ef5110\") " pod="openshift-dns/dns-default-tw5d5" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.599194 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a912999-007c-495d-aaa3-857d76158a91-secret-volume\") pod \"collect-profiles-29495010-t7nh4\" (UID: \"4a912999-007c-495d-aaa3-857d76158a91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.601105 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277-profile-collector-cert\") pod \"catalog-operator-68c6474976-zvhxk\" (UID: \"0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.601283 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a161323e-d13e-46da-b8bd-347b56ef5110-metrics-tls\") pod \"dns-default-tw5d5\" (UID: \"a161323e-d13e-46da-b8bd-347b56ef5110\") " pod="openshift-dns/dns-default-tw5d5" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.606910 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ed80deac-23a5-4504-af92-231afa07fd27-node-bootstrap-token\") pod \"machine-config-server-qs6wx\" (UID: \"ed80deac-23a5-4504-af92-231afa07fd27\") " pod="openshift-machine-config-operator/machine-config-server-qs6wx" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.607184 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a14210e2-42e9-45d9-8633-a5df1a863a9f-serving-cert\") pod \"service-ca-operator-777779d784-9b7ll\" (UID: \"a14210e2-42e9-45d9-8633-a5df1a863a9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.607298 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/632f321e-e374-410c-9dc3-0aacadc97f3b-proxy-tls\") pod \"machine-config-operator-74547568cd-bmtm4\" (UID: \"632f321e-e374-410c-9dc3-0aacadc97f3b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.607344 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277-srv-cert\") pod \"catalog-operator-68c6474976-zvhxk\" (UID: \"0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.607546 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cf3d6df4-e07e-4d72-b2b6-20dcb29700d7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-x9bx7\" (UID: \"cf3d6df4-e07e-4d72-b2b6-20dcb29700d7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9bx7" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.607609 5008 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ed80deac-23a5-4504-af92-231afa07fd27-certs\") pod \"machine-config-server-qs6wx\" (UID: \"ed80deac-23a5-4504-af92-231afa07fd27\") " pod="openshift-machine-config-operator/machine-config-server-qs6wx" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.607930 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/217f16d7-943b-4603-88fa-155377da9788-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8mt\" (UID: \"217f16d7-943b-4603-88fa-155377da9788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.608382 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3105b11-cb5b-4006-8f1b-17b90922d743-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-w5jbk\" (UID: \"e3105b11-cb5b-4006-8f1b-17b90922d743\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.608575 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aa595b2b-fee5-4e54-926b-40571cf2f472-cert\") pod \"ingress-canary-p7nds\" (UID: \"aa595b2b-fee5-4e54-926b-40571cf2f472\") " pod="openshift-ingress-canary/ingress-canary-p7nds" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.634365 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.641848 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc6tk\" (UniqueName: \"kubernetes.io/projected/272fd84c-e1ec-47ce-a8dc-fb0573d1208c-kube-api-access-sc6tk\") pod \"olm-operator-6b444d44fb-mqnz8\" (UID: \"272fd84c-e1ec-47ce-a8dc-fb0573d1208c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.661045 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kgkr\" (UniqueName: \"kubernetes.io/projected/632f321e-e374-410c-9dc3-0aacadc97f3b-kube-api-access-8kgkr\") pod \"machine-config-operator-74547568cd-bmtm4\" (UID: \"632f321e-e374-410c-9dc3-0aacadc97f3b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.678017 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fwml\" (UniqueName: \"kubernetes.io/projected/20ed8d47-c62e-4dfd-aa4d-630a6db1b3a9-kube-api-access-4fwml\") pod \"migrator-59844c95c7-s5vvl\" (UID: \"20ed8d47-c62e-4dfd-aa4d-630a6db1b3a9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s5vvl" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.698064 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:00 crc kubenswrapper[5008]: E0129 15:30:00.698280 5008 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:01.198206382 +0000 UTC m=+144.871060619 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.698398 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: E0129 15:30:00.699131 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:01.199118305 +0000 UTC m=+144.871972542 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.701330 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr"] Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.708355 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/217f16d7-943b-4603-88fa-155377da9788-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8mt\" (UID: \"217f16d7-943b-4603-88fa-155377da9788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.744600 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n6q5\" (UniqueName: \"kubernetes.io/projected/217f16d7-943b-4603-88fa-155377da9788-kube-api-access-7n6q5\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8mt\" (UID: \"217f16d7-943b-4603-88fa-155377da9788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.745663 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfwg7\" (UniqueName: \"kubernetes.io/projected/ed80deac-23a5-4504-af92-231afa07fd27-kube-api-access-gfwg7\") pod \"machine-config-server-qs6wx\" (UID: \"ed80deac-23a5-4504-af92-231afa07fd27\") " pod="openshift-machine-config-operator/machine-config-server-qs6wx" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 
15:30:00.756757 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prgds\" (UniqueName: \"kubernetes.io/projected/aa595b2b-fee5-4e54-926b-40571cf2f472-kube-api-access-prgds\") pod \"ingress-canary-p7nds\" (UID: \"aa595b2b-fee5-4e54-926b-40571cf2f472\") " pod="openshift-ingress-canary/ingress-canary-p7nds" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.759953 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s5vvl" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.775972 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lr4m\" (UniqueName: \"kubernetes.io/projected/5ca041e2-baff-40ee-8fc9-e9bc58aee628-kube-api-access-2lr4m\") pod \"csi-hostpathplugin-g9x2n\" (UID: \"5ca041e2-baff-40ee-8fc9-e9bc58aee628\") " pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:30:00 crc kubenswrapper[5008]: W0129 15:30:00.791858 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod696d81dd_3f1a_4c58_ae69_29fff54e590b.slice/crio-766c295456432be9dc1224994442bbdfac4302ae1ac813849b4540a5a3403209 WatchSource:0}: Error finding container 766c295456432be9dc1224994442bbdfac4302ae1ac813849b4540a5a3403209: Status 404 returned error can't find the container with id 766c295456432be9dc1224994442bbdfac4302ae1ac813849b4540a5a3403209 Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.797915 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkrwh\" (UniqueName: \"kubernetes.io/projected/0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277-kube-api-access-qkrwh\") pod \"catalog-operator-68c6474976-zvhxk\" (UID: \"0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.799636 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:00 crc kubenswrapper[5008]: E0129 15:30:00.799794 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:01.29975983 +0000 UTC m=+144.972614067 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.799977 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:00 crc kubenswrapper[5008]: E0129 15:30:00.800308 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:01.300298064 +0000 UTC m=+144.973152341 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.820645 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44jf7\" (UniqueName: \"kubernetes.io/projected/e3105b11-cb5b-4006-8f1b-17b90922d743-kube-api-access-44jf7\") pod \"package-server-manager-789f6589d5-w5jbk\" (UID: \"e3105b11-cb5b-4006-8f1b-17b90922d743\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.837320 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2dsz\" (UniqueName: \"kubernetes.io/projected/cf3d6df4-e07e-4d72-b2b6-20dcb29700d7-kube-api-access-d2dsz\") pod \"control-plane-machine-set-operator-78cbb6b69f-x9bx7\" (UID: \"cf3d6df4-e07e-4d72-b2b6-20dcb29700d7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9bx7" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.843843 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.859034 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dg4s\" (UniqueName: \"kubernetes.io/projected/a14210e2-42e9-45d9-8633-a5df1a863a9f-kube-api-access-2dg4s\") pod \"service-ca-operator-777779d784-9b7ll\" (UID: \"a14210e2-42e9-45d9-8633-a5df1a863a9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.899100 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhkzq\" (UniqueName: \"kubernetes.io/projected/4a912999-007c-495d-aaa3-857d76158a91-kube-api-access-nhkzq\") pod \"collect-profiles-29495010-t7nh4\" (UID: \"4a912999-007c-495d-aaa3-857d76158a91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.902456 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:00 crc kubenswrapper[5008]: E0129 15:30:00.903013 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:01.402994273 +0000 UTC m=+145.075848520 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.904128 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.926696 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgvxq\" (UniqueName: \"kubernetes.io/projected/a161323e-d13e-46da-b8bd-347b56ef5110-kube-api-access-pgvxq\") pod \"dns-default-tw5d5\" (UID: \"a161323e-d13e-46da-b8bd-347b56ef5110\") " pod="openshift-dns/dns-default-tw5d5" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.927269 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.929562 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.938678 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9bx7" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.948959 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.959331 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr"] Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.959558 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-qs6wx" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.966927 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-p7nds" Jan 29 15:30:00 crc kubenswrapper[5008]: I0129 15:30:00.998989 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.001921 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-tw5d5" Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.004536 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:01 crc kubenswrapper[5008]: E0129 15:30:01.004891 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:01.50487963 +0000 UTC m=+145.177733857 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.032053 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt" Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.069278 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4" Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.106114 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:01 crc kubenswrapper[5008]: E0129 15:30:01.106431 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:01.606412517 +0000 UTC m=+145.279266754 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.181409 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2dsnp"] Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.207453 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:01 crc kubenswrapper[5008]: E0129 15:30:01.207813 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:01.707775971 +0000 UTC m=+145.380649738 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.227060 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-w2lv5"] Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.232306 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6zjns"] Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.280167 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" event={"ID":"7d5c80c8-4e74-4618-96c0-8e76168ad709","Type":"ContainerStarted","Data":"877a7a5331b5add1273bcb856b0a6b558e22fc4ee16ab1f101067f85b3c64f92"} Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.282071 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6wmrp" event={"ID":"64cf2ff9-40f4-48a5-a16c-6513cf0470bd","Type":"ContainerStarted","Data":"b7c6360486afb3695d7f0cab5e94240be2d35122a76f5d2f164ac0cff78e316c"} Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.282494 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-6wmrp" Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.282977 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr" event={"ID":"cb93f308-4554-41a0-a5c7-28d516a419c7","Type":"ContainerStarted","Data":"48cc5b0c7577ca631f2af7126b9199d3db84603543952247236516fe60199dfd"} Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.284513 5008 patch_prober.go:28] interesting pod/downloads-7954f5f757-6wmrp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.284554 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6wmrp" podUID="64cf2ff9-40f4-48a5-a16c-6513cf0470bd" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.286224 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr" event={"ID":"696d81dd-3f1a-4c58-ae69-29fff54e590b","Type":"ContainerStarted","Data":"766c295456432be9dc1224994442bbdfac4302ae1ac813849b4540a5a3403209"} Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.287991 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6" event={"ID":"8d495a4f-d952-4050-a895-e6650c083e0d","Type":"ContainerStarted","Data":"60e0f31c678f70981e70a492642eae649c71539fbf0605d0a371bacca465f83a"} Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.290991 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress/router-default-5444994796-lkcrp" event={"ID":"380625b0-02b5-417a-bd1e-7ccf56f56059","Type":"ContainerStarted","Data":"7b5065932b2f00b6ed88c79311b771081ad7ec24f48aa25d546d34c280f791c7"} Jan 29 15:30:01 crc kubenswrapper[5008]: W0129 15:30:01.297612 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30a4c50c_34f7_4c9c_9cbd_baaf50ed16e1.slice/crio-06359078d405bd0e54235a406ebdf31eea4653e6c329abc798e56c3dfc469667 WatchSource:0}: Error finding container 06359078d405bd0e54235a406ebdf31eea4653e6c329abc798e56c3dfc469667: Status 404 returned error can't find the container with id 06359078d405bd0e54235a406ebdf31eea4653e6c329abc798e56c3dfc469667 Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.309235 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.309495 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6db03bb1-4833-4d3f-82d5-08ec5710251f-config\") pod \"machine-api-operator-5694c8668f-fsx74\" (UID: \"6db03bb1-4833-4d3f-82d5-08ec5710251f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74" Jan 29 15:30:01 crc kubenswrapper[5008]: E0129 15:30:01.310549 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:01.810530032 +0000 UTC m=+145.483384279 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.311602 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6db03bb1-4833-4d3f-82d5-08ec5710251f-config\") pod \"machine-api-operator-5694c8668f-fsx74\" (UID: \"6db03bb1-4833-4d3f-82d5-08ec5710251f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74"
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.368701 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wkn92"]
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.415599 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:01 crc kubenswrapper[5008]: E0129 15:30:01.416342 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:01.916324531 +0000 UTC m=+145.589178768 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.428597 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg"]
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.428720 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74"
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.448576 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf"]
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.449821 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8"]
Jan 29 15:30:01 crc kubenswrapper[5008]: W0129 15:30:01.507234 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf88f09ca_9a9f_4d6e_bb2f_f00d75ae11fb.slice/crio-945e699e59852dc812c44fd74b49c97c250fab60ad324066e0e2e1c3a950db2e WatchSource:0}: Error finding container 945e699e59852dc812c44fd74b49c97c250fab60ad324066e0e2e1c3a950db2e: Status 404 returned error can't find the container with id 945e699e59852dc812c44fd74b49c97c250fab60ad324066e0e2e1c3a950db2e
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.516835 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:01 crc kubenswrapper[5008]: E0129 15:30:01.517797 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:02.017756357 +0000 UTC m=+145.690610594 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.617874 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:01 crc kubenswrapper[5008]: E0129 15:30:01.618206 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:02.118195587 +0000 UTC m=+145.791049824 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.723278 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:01 crc kubenswrapper[5008]: E0129 15:30:01.723746 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:02.22372455 +0000 UTC m=+145.896578787 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.824493 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:01 crc kubenswrapper[5008]: E0129 15:30:01.825364 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:02.32534961 +0000 UTC m=+145.998203837 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.928165 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:01 crc kubenswrapper[5008]: E0129 15:30:01.928548 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:02.42852863 +0000 UTC m=+146.101382867 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:01 crc kubenswrapper[5008]: I0129 15:30:01.995372 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.016394 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.029376 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:02 crc kubenswrapper[5008]: E0129 15:30:02.029694 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:02.529682099 +0000 UTC m=+146.202536336 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:02 crc kubenswrapper[5008]: W0129 15:30:02.058899 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf56b5e44_f079_4c56_9e19_e09996979003.slice/crio-283a3b198b8ebcea901bee24ad0194d994a822693f8e2f8f5e5b86077a5737c1 WatchSource:0}: Error finding container 283a3b198b8ebcea901bee24ad0194d994a822693f8e2f8f5e5b86077a5737c1: Status 404 returned error can't find the container with id 283a3b198b8ebcea901bee24ad0194d994a822693f8e2f8f5e5b86077a5737c1
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.130472 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:02 crc kubenswrapper[5008]: E0129 15:30:02.130724 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:02.630708434 +0000 UTC m=+146.303562671 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.203332 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-6wmrp" podStartSLOduration=124.203314595 podStartE2EDuration="2m4.203314595s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:02.202202116 +0000 UTC m=+145.875056353" watchObservedRunningTime="2026-01-29 15:30:02.203314595 +0000 UTC m=+145.876168832"
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.232537 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:02 crc kubenswrapper[5008]: E0129 15:30:02.233014 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:02.733000023 +0000 UTC m=+146.405854260 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.314171 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" event={"ID":"c9bc5b93-0c42-401c-8ca5-e5154e8be34d","Type":"ContainerStarted","Data":"f9fdd5e63506b623e7ac7fad8b3704775ab1baa47b2c8a054b36ef7c51f63734"}
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.314423 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" event={"ID":"c9bc5b93-0c42-401c-8ca5-e5154e8be34d","Type":"ContainerStarted","Data":"a94db479e2c3c28357dcdc8bd1f0553d527dc2b6d6b066269259da8b458dc0d6"}
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.317164 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr" event={"ID":"696d81dd-3f1a-4c58-ae69-29fff54e590b","Type":"ContainerStarted","Data":"faddad3801b36a1b5efb7f021265ba0b8cf5ce6cc6212681d5448ba08c10d676"}
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.318250 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg" event={"ID":"3e0bc350-e279-4e74-a70e-c89593f115f3","Type":"ContainerStarted","Data":"ba194aecb8b7b07da24347645e17594538b3bffb024abe9f2b10c66f8e58e0ae"}
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.320270 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-w2lv5" event={"ID":"657b37ac-43ff-4309-9bfa-5220bccb08c0","Type":"ContainerStarted","Data":"73abf826bcbd7e7504623e7b47699d195d27874fe60fd7928104048edbf5d2bf"}
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.320323 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-w2lv5" event={"ID":"657b37ac-43ff-4309-9bfa-5220bccb08c0","Type":"ContainerStarted","Data":"360187cae4c917b9123e7622621d57ac9a8bad205ce113f28ca8e357f786a76a"}
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.322507 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" event={"ID":"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1","Type":"ContainerStarted","Data":"06359078d405bd0e54235a406ebdf31eea4653e6c329abc798e56c3dfc469667"}
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.323511 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr" event={"ID":"cb93f308-4554-41a0-a5c7-28d516a419c7","Type":"ContainerStarted","Data":"ecd556d3b48a990bce744b3530d2400e624783729add88ee057e582c469708cf"}
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.324123 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf" event={"ID":"98a7839a-3ca2-49f7-a330-f77ffc4e4da3","Type":"ContainerStarted","Data":"856bd5b826873efd4ba7fb31e6a28bffee9b67efdc724753b10c4a2d1afe1c3c"}
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.324770 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" event={"ID":"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb","Type":"ContainerStarted","Data":"945e699e59852dc812c44fd74b49c97c250fab60ad324066e0e2e1c3a950db2e"}
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.326240 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6" event={"ID":"8d495a4f-d952-4050-a895-e6650c083e0d","Type":"ContainerStarted","Data":"cd6d4f39442284946f16d7a0b792ec3e66de30e9dc56a9bdfd64c76f9b7148cd"}
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.327486 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" event={"ID":"7d5c80c8-4e74-4618-96c0-8e76168ad709","Type":"ContainerStarted","Data":"4c0c93394c1503334716279d33aab711196676ea784b3c3aa6166010a6b66a0e"}
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.327713 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk"
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.332268 5008 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fpmxk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.333917 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" podUID="7d5c80c8-4e74-4618-96c0-8e76168ad709" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 29 15:30:02 crc kubenswrapper[5008]: E0129 15:30:02.333355 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:02.833335909 +0000 UTC m=+146.506190146 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.333285 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.334650 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2dsnp" event={"ID":"820dc798-ef25-4bda-947f-8c66b290816d","Type":"ContainerStarted","Data":"0b6fc6fe80c6bb0353c34b853cc6c54cd78d9c076665787a76bcc0efafcba012"}
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.334729 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2dsnp" event={"ID":"820dc798-ef25-4bda-947f-8c66b290816d","Type":"ContainerStarted","Data":"53b0b2512c48956ec122d0b88b3c39c6dbd02e3557a3d71540e30ef4c1665b09"}
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.335241 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.335259 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tczgr" podStartSLOduration=124.335247099 podStartE2EDuration="2m4.335247099s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:02.33260486 +0000 UTC m=+146.005459107" watchObservedRunningTime="2026-01-29 15:30:02.335247099 +0000 UTC m=+146.008101336"
Jan 29 15:30:02 crc kubenswrapper[5008]: E0129 15:30:02.336180 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:02.836163513 +0000 UTC m=+146.509017750 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.340916 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-qs6wx" event={"ID":"ed80deac-23a5-4504-af92-231afa07fd27","Type":"ContainerStarted","Data":"2df5f3001b1a9158190e6ec9b9ff492d2de9247fbaf7d8bfa0d6971c2a614273"}
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.348466 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-lkcrp" event={"ID":"380625b0-02b5-417a-bd1e-7ccf56f56059","Type":"ContainerStarted","Data":"ec07f9f91e2751c1b8e9b75c9c2c6e1533e44fea17e8e88966dd9a07a4ccf470"}
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.351739 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" event={"ID":"f56b5e44-f079-4c56-9e19-e09996979003","Type":"ContainerStarted","Data":"283a3b198b8ebcea901bee24ad0194d994a822693f8e2f8f5e5b86077a5737c1"}
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.351942 5008 patch_prober.go:28] interesting pod/downloads-7954f5f757-6wmrp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body=
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.352025 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6wmrp" podUID="64cf2ff9-40f4-48a5-a16c-6513cf0470bd" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused"
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.396739 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-lkcrp" podStartSLOduration=123.396717939 podStartE2EDuration="2m3.396717939s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:02.396181144 +0000 UTC m=+146.069035381" watchObservedRunningTime="2026-01-29 15:30:02.396717939 +0000 UTC m=+146.069572186"
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.399149 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" podStartSLOduration=124.399140032 podStartE2EDuration="2m4.399140032s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:02.354518863 +0000 UTC m=+146.027373100" watchObservedRunningTime="2026-01-29 15:30:02.399140032 +0000 UTC m=+146.071994289"
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.424907 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.434334 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-g2rk6"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.434392 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.436281 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:02 crc kubenswrapper[5008]: E0129 15:30:02.438014 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:02.937988579 +0000 UTC m=+146.610842876 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.438577 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-lkcrp"
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.441306 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.441348 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.459620 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-zs2tk"]
Jan 29 15:30:02 crc kubenswrapper[5008]: W0129 15:30:02.460741 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b987d67_e424_4286_a25d_11bfc4d1e577.slice/crio-0b8cee56a36757113254bd0a8115cfe8d9b4af6f1d22f14ff0455c0b63a5f6ba WatchSource:0}: Error finding container 0b8cee56a36757113254bd0a8115cfe8d9b4af6f1d22f14ff0455c0b63a5f6ba: Status 404 returned error can't find the container with id 0b8cee56a36757113254bd0a8115cfe8d9b4af6f1d22f14ff0455c0b63a5f6ba
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.464928 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4l85w"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.466264 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-cb6xn"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.475835 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-v7r8x"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.483763 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4268l"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.491217 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.495853 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.516383 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.532166 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.543750 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:02 crc kubenswrapper[5008]: E0129 15:30:02.545242 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:03.045227967 +0000 UTC m=+146.718082204 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.558875 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.564275 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-s5vvl"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.575720 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-468fl"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.584965 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-p7nds"]
Jan 29 15:30:02 crc kubenswrapper[5008]: W0129 15:30:02.587720 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7473d665_3627_4470_a820_ebdbdc113587.slice/crio-744d2c5b14b18a0366937cb219697ae3c655391e7942e7c446395ce7d6b803ff WatchSource:0}: Error finding container 744d2c5b14b18a0366937cb219697ae3c655391e7942e7c446395ce7d6b803ff: Status 404 returned error can't find the container with id 744d2c5b14b18a0366937cb219697ae3c655391e7942e7c446395ce7d6b803ff
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.588609 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-g9x2n"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.590524 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9bx7"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.593139 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.604899 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94"]
Jan 29 15:30:02 crc kubenswrapper[5008]: W0129 15:30:02.608015 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1a4a04b_067c_43f1_b355_46161babe869.slice/crio-1e01f1c47448495ee747be64b54e9beedefe2ff7cb0493bf37d8a12ea3bb0a20 WatchSource:0}: Error finding container 1e01f1c47448495ee747be64b54e9beedefe2ff7cb0493bf37d8a12ea3bb0a20: Status 404 returned error can't find the container with id 1e01f1c47448495ee747be64b54e9beedefe2ff7cb0493bf37d8a12ea3bb0a20
Jan 29 15:30:02 crc kubenswrapper[5008]: W0129 15:30:02.650114 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf3d6df4_e07e_4d72_b2b6_20dcb29700d7.slice/crio-f26ad938836f57f6ce9095bd6c0aed92071459715bd04cf319a04b353ef05a53 WatchSource:0}: Error finding container f26ad938836f57f6ce9095bd6c0aed92071459715bd04cf319a04b353ef05a53: Status 404 returned error can't find the container with id f26ad938836f57f6ce9095bd6c0aed92071459715bd04cf319a04b353ef05a53
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.651746 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:02 crc kubenswrapper[5008]: E0129 15:30:02.652136 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:03.151973992 +0000 UTC m=+146.824828239 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.652543 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:02 crc kubenswrapper[5008]: E0129 15:30:02.653168 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:03.153156683 +0000 UTC m=+146.826010920 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.704232 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.753604 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:02 crc kubenswrapper[5008]: E0129 15:30:02.753917 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:03.25390053 +0000 UTC m=+146.926754767 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:02 crc kubenswrapper[5008]: W0129 15:30:02.757883 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a912999_007c_495d_aaa3_857d76158a91.slice/crio-e472830b4505664315811f646f65ea00f2b653c72238508aa40d729f5d7fedcb WatchSource:0}: Error finding container e472830b4505664315811f646f65ea00f2b653c72238508aa40d729f5d7fedcb: Status 404 returned error can't find the container with id e472830b4505664315811f646f65ea00f2b653c72238508aa40d729f5d7fedcb
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.767584 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.811554 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.828204 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.862020 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:02 crc kubenswrapper[5008]: E0129 15:30:02.862404 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:03.36238808 +0000 UTC m=+147.035242317 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.907536 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-fsx74"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.928695 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-tw5d5"]
Jan 29 15:30:02 crc kubenswrapper[5008]: I0129 15:30:02.962807 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:02 crc kubenswrapper[5008]: E0129 15:30:02.963229 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:03.46321147 +0000 UTC m=+147.136065707 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:03 crc kubenswrapper[5008]: W0129 15:30:03.000308 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod272fd84c_e1ec_47ce_a8dc_fb0573d1208c.slice/crio-9cd6c3af2f085fd20ceca6542f6e23c5f980afb4f8976332f21c1f6e5a3f9c95 WatchSource:0}: Error finding container 9cd6c3af2f085fd20ceca6542f6e23c5f980afb4f8976332f21c1f6e5a3f9c95: Status 404 returned error can't find the container with id 9cd6c3af2f085fd20ceca6542f6e23c5f980afb4f8976332f21c1f6e5a3f9c95
Jan 29 15:30:03 crc kubenswrapper[5008]: W0129 15:30:03.048444 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda161323e_d13e_46da_b8bd_347b56ef5110.slice/crio-7a2225925c0c07bab41a304c11b06395202c0115dd5e08fbdf79ca5be853a611 WatchSource:0}: Error finding container 7a2225925c0c07bab41a304c11b06395202c0115dd5e08fbdf79ca5be853a611: Status 404 returned error can't find the container with id 7a2225925c0c07bab41a304c11b06395202c0115dd5e08fbdf79ca5be853a611
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.064757 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:03 crc kubenswrapper[5008]: E0129 15:30:03.065279 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:03.565260422 +0000 UTC m=+147.238114659 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.165383 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:03 crc kubenswrapper[5008]: E0129 15:30:03.165845 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:03.665829585 +0000 UTC m=+147.338683822 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.267164 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:03 crc kubenswrapper[5008]: E0129 15:30:03.267483 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:03.767470697 +0000 UTC m=+147.440324934 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.369671 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:03 crc kubenswrapper[5008]: E0129 15:30:03.369874 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:03.869847167 +0000 UTC m=+147.542701404 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.370386 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:03 crc kubenswrapper[5008]: E0129 15:30:03.370710 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:03.870702458 +0000 UTC m=+147.543556695 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.370774 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf" event={"ID":"1408f146-4652-41e3-8947-2f230e515750","Type":"ContainerStarted","Data":"7fd3bac13c8d6d623ec5ce8691ee565adf989abe6b4a9d696fc41378d51b54c1"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.370823 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf" event={"ID":"1408f146-4652-41e3-8947-2f230e515750","Type":"ContainerStarted","Data":"62a4c862802455f7f13e84a4f5d43f1b0e2fb36f0296a54bcd5b45a113396b5a"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.372671 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7" event={"ID":"8eb3ecfb-3675-4931-b618-9a5ba6d23b1d","Type":"ContainerStarted","Data":"163684b7504773b63bd5adad40214b4960cfe011f40abdc9034978ac1e6139df"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.374214 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4" event={"ID":"632f321e-e374-410c-9dc3-0aacadc97f3b","Type":"ContainerStarted","Data":"336e419022e770079f099785d1b181791219e135db5bd3ba119d808a509365d4"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.396684 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-zs2tk" event={"ID":"5b987d67-e424-4286-a25d-11bfc4d1e577","Type":"ContainerStarted","Data":"0b8cee56a36757113254bd0a8115cfe8d9b4af6f1d22f14ff0455c0b63a5f6ba"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.401336 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-g2rk6" event={"ID":"3f7de4a5-3819-41c0-9e2e-766dcff408bb","Type":"ContainerStarted","Data":"0d50d0b75f6e0f8a4026a940843934088791e81f1a0bc633f602d35cd43598eb"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.410020 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" event={"ID":"f88f09ca-9a9f-4d6e-bb2f-f00d75ae11fb","Type":"ContainerStarted","Data":"a95caa66886156554c453682e623f6b46a194df2e3bcacdcc9b6c1208e8f9e27"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.430639 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-cb6xn" event={"ID":"0b6fe31f-5401-4a2e-bccb-e57fab2a35ba","Type":"ContainerStarted","Data":"b6c734ddae850b020a8937b3c086e0456e1a5603348817d8875b69a322e1d4cb"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.434528 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94" event={"ID":"ec989c54-8ec3-4f9d-87b0-2665776ffd15","Type":"ContainerStarted","Data":"b1f492a372d6eae470027fc505b85da5dcd1cc39903a5f647e52dfb3b2d873ca"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.435921 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" event={"ID":"b1a4a04b-067c-43f1-b355-46161babe869","Type":"ContainerStarted","Data":"1e01f1c47448495ee747be64b54e9beedefe2ff7cb0493bf37d8a12ea3bb0a20"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.440223 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.440280 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.455975 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" event={"ID":"f56b5e44-f079-4c56-9e19-e09996979003","Type":"ContainerStarted","Data":"8a58e85619a9d68ab7ca1c73646da4750ac77969c5d738aeb0d3b0851d9dc82e"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.456949 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl"
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.459249 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg" event={"ID":"3e0bc350-e279-4e74-a70e-c89593f115f3","Type":"ContainerStarted","Data":"87926bebfd41473ef5acb541830b9fae196b3d4f84efe83c9867c94af1c84690"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.467420 5008 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-4zwkl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body=
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.467469 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" podUID="f56b5e44-f079-4c56-9e19-e09996979003" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused"
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.468982 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6" event={"ID":"1b0f95d5-456d-45a7-9bfd-49efbf2a16ce","Type":"ContainerStarted","Data":"8d07e1f320ec80ca7ae40d7dd78e3fb623341ff7bca3b744228d15d6a44094c2"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.469014 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6" event={"ID":"1b0f95d5-456d-45a7-9bfd-49efbf2a16ce","Type":"ContainerStarted","Data":"722c047341be5f0f9b650010f9f21dcad960b41633cc80d83490742446b5f6c5"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.470677 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk" event={"ID":"e3105b11-cb5b-4006-8f1b-17b90922d743","Type":"ContainerStarted","Data":"73f6d1b44636709ff4b14e56b2dddc17b510c87ed09c684f23c5478e481c98d4"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.472330 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-qs6wx" event={"ID":"ed80deac-23a5-4504-af92-231afa07fd27","Type":"ContainerStarted","Data":"d1858076cc9c0d4595b98d460d6b6ce088c202486ac2e5b777caebf358b1b004"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.474030 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-wkn92" podStartSLOduration=125.474017313 podStartE2EDuration="2m5.474017313s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:03.42920303 +0000 UTC m=+147.102057267" watchObservedRunningTime="2026-01-29 15:30:03.474017313 +0000 UTC m=+147.146871560"
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.474112 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" podStartSLOduration=124.474109156 podStartE2EDuration="2m4.474109156s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:03.473600333 +0000 UTC m=+147.146454570" watchObservedRunningTime="2026-01-29 15:30:03.474109156 +0000 UTC m=+147.146963403"
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.477212 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:03 crc kubenswrapper[5008]: E0129 15:30:03.478257 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:03.978240014 +0000 UTC m=+147.651094251 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.478992 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll" event={"ID":"a14210e2-42e9-45d9-8633-a5df1a863a9f","Type":"ContainerStarted","Data":"94279afa3975feaf12b417ce18986133f731f91bdcd91225d93fa3677504f600"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.481839 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2dsnp" event={"ID":"820dc798-ef25-4bda-947f-8c66b290816d","Type":"ContainerStarted","Data":"eaa35375549041df26c5c2562e481b702aa80b65ed6196f86f72e81a93c0ef28"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.483155 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf" event={"ID":"98a7839a-3ca2-49f7-a330-f77ffc4e4da3","Type":"ContainerStarted","Data":"3b34e358490b24ea840ff744cc1313b6fb6efc2a6401f73c0f711942fb851192"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.500416 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9bx7" event={"ID":"cf3d6df4-e07e-4d72-b2b6-20dcb29700d7","Type":"ContainerStarted","Data":"f26ad938836f57f6ce9095bd6c0aed92071459715bd04cf319a04b353ef05a53"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.502160 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" event={"ID":"00332b75-a73b-49c1-9b72-73445baccf6d","Type":"ContainerStarted","Data":"b733768ba86559d686adf72003f41b4761850c81887b6cabc93a0692634ef414"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.505262 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-tw5d5" event={"ID":"a161323e-d13e-46da-b8bd-347b56ef5110","Type":"ContainerStarted","Data":"7a2225925c0c07bab41a304c11b06395202c0115dd5e08fbdf79ca5be853a611"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.506197 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s5vvl" event={"ID":"20ed8d47-c62e-4dfd-aa4d-630a6db1b3a9","Type":"ContainerStarted","Data":"5663bfb52f3262c7efcc8eb03f615bfc6f226e4272bbbf5e73e8b69e357d20cb"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.508967 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74" event={"ID":"6db03bb1-4833-4d3f-82d5-08ec5710251f","Type":"ContainerStarted","Data":"4921a3d56c7fa08f67856e16dd8430555752f54f44d7bc78fe71cbcdf760a6dc"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.511186 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6lddg" podStartSLOduration=124.511164396 podStartE2EDuration="2m4.511164396s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:03.508967579 +0000 UTC m=+147.181821816" watchObservedRunningTime="2026-01-29 15:30:03.511164396 +0000 UTC m=+147.184018633"
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.511485 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" event={"ID":"272fd84c-e1ec-47ce-a8dc-fb0573d1208c","Type":"ContainerStarted","Data":"9cd6c3af2f085fd20ceca6542f6e23c5f980afb4f8976332f21c1f6e5a3f9c95"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.511570 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-qs6wx" podStartSLOduration=6.511563447 podStartE2EDuration="6.511563447s" podCreationTimestamp="2026-01-29 15:29:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:03.493852893 +0000 UTC m=+147.166707160" watchObservedRunningTime="2026-01-29 15:30:03.511563447 +0000 UTC m=+147.184417684"
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.517194 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt" event={"ID":"217f16d7-943b-4603-88fa-155377da9788","Type":"ContainerStarted","Data":"dfba1fc312a36bee852844d68251d314f733f9dac3c325e151c142c3787b0de9"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.519601 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-p7nds" event={"ID":"aa595b2b-fee5-4e54-926b-40571cf2f472","Type":"ContainerStarted","Data":"195c3f24829bfaf34921e251b99a6f3bdae50c2b9262173c00cead0ae583e0b9"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.521804 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl" event={"ID":"3c5e8be2-fe94-488c-801e-d1a56700bfa5","Type":"ContainerStarted","Data":"100ecffc6cff9494691eabff05729c4d5b7c0766f0e736a4cc1be50aa03aa882"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.521857 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl" event={"ID":"3c5e8be2-fe94-488c-801e-d1a56700bfa5","Type":"ContainerStarted","Data":"327173dfc0d4a0283c57ab91db8bf6bfaf7d338be803aaada8937111649f350b"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.523850 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" event={"ID":"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1","Type":"ContainerStarted","Data":"2fdcfc92513722a0ed1839d1becd6b4c7cf2ef93e9416fff2dde6f74896351b7"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.524383 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-6zjns"
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.525279 5008 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-6zjns container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.36:6443/healthz\": dial tcp 10.217.0.36:6443: connect: connection refused" start-of-body=
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.525319 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" podUID="30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.36:6443/healthz\": dial tcp 10.217.0.36:6443: connect: connection refused"
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.545112 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrdsf" podStartSLOduration=124.545087514 podStartE2EDuration="2m4.545087514s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:03.529183157 +0000 UTC m=+147.202037414" watchObservedRunningTime="2026-01-29 15:30:03.545087514 +0000 UTC m=+147.217941761"
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.548974 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" event={"ID":"4adf65cb-4f11-4061-bcb5-71c3d9b890f7","Type":"ContainerStarted","Data":"402d302b4932479374dd27184aa53d55585f96a1840b5fcb7e2d79bb208c3ae4"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.556104 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr" event={"ID":"cb93f308-4554-41a0-a5c7-28d516a419c7","Type":"ContainerStarted","Data":"0d939ce01262d7645bf7f7f25b58669fa679b6b6aa223e47946a2b36751b1d53"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.559571 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" event={"ID":"0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277","Type":"ContainerStarted","Data":"e3d6b86a7668acbc7b03cdc7f635fbf977789cd2e641c90b388de06d57416348"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.566363 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4" event={"ID":"4a912999-007c-495d-aaa3-857d76158a91","Type":"ContainerStarted","Data":"e472830b4505664315811f646f65ea00f2b653c72238508aa40d729f5d7fedcb"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.569964 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" podStartSLOduration=124.569951366 podStartE2EDuration="2m4.569951366s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:03.55065894 +0000 UTC m=+147.223513177" watchObservedRunningTime="2026-01-29 15:30:03.569951366 +0000 UTC m=+147.242805603"
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.570477 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4l85w" event={"ID":"653b37fe-d452-4111-b27f-ef75530abe41","Type":"ContainerStarted","Data":"f6fbbe2d489f924541978c5eb7db46c1df1746d94d1a6044b1c931f8e41a1780"}
Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.570588 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ghcqr" podStartSLOduration=124.570584182 podStartE2EDuration="2m4.570584182s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:03.569636937 +0000 UTC m=+147.242491194" watchObservedRunningTime="2026-01-29 15:30:03.570584182 +0000 UTC m=+147.243438419" Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.578493 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:03 crc kubenswrapper[5008]: E0129 15:30:03.581246 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:04.081224841 +0000 UTC m=+147.754079148 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.582928 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" event={"ID":"5ca041e2-baff-40ee-8fc9-e9bc58aee628","Type":"ContainerStarted","Data":"f96f93669d3d81dd721b4badc2ba7048ef6f1363d70a87e0a835ce8e7ff42513"} Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.594770 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" event={"ID":"1c37e4bb-792b-4317-87ae-ca4172740500","Type":"ContainerStarted","Data":"8261c2974da97e6c65209b7eb7ac686cc65f4f3389e21ef6308e7bdc35698547"} Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.594859 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" event={"ID":"1c37e4bb-792b-4317-87ae-ca4172740500","Type":"ContainerStarted","Data":"b8b44f3c1bdb03e7c9f1bd11c59ce79debe7026a884d1cd10a95c60fbd40cce7"} Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.598775 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4268l" event={"ID":"7473d665-3627-4470-a820-ebdbdc113587","Type":"ContainerStarted","Data":"744d2c5b14b18a0366937cb219697ae3c655391e7942e7c446395ce7d6b803ff"} Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.600278 5008 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fpmxk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.600332 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" podUID="7d5c80c8-4e74-4618-96c0-8e76168ad709" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: 
connect: connection refused" Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.614836 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-w2lv5" podStartSLOduration=124.61481975 podStartE2EDuration="2m4.61481975s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:03.614239435 +0000 UTC m=+147.287093692" watchObservedRunningTime="2026-01-29 15:30:03.61481975 +0000 UTC m=+147.287673987" Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.680112 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:03 crc kubenswrapper[5008]: E0129 15:30:03.681060 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:04.181033814 +0000 UTC m=+147.853888121 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.784100 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:03 crc kubenswrapper[5008]: E0129 15:30:03.784595 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:04.284578685 +0000 UTC m=+147.957432932 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.888264 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:03 crc kubenswrapper[5008]: E0129 15:30:03.888396 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:04.388367432 +0000 UTC m=+148.061221679 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.889666 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:03 crc kubenswrapper[5008]: E0129 15:30:03.890266 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:04.390247971 +0000 UTC m=+148.063102248 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.991556 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:03 crc kubenswrapper[5008]: E0129 15:30:03.992009 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:04.491981535 +0000 UTC m=+148.164835782 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:03 crc kubenswrapper[5008]: I0129 15:30:03.992190 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:03 crc kubenswrapper[5008]: E0129 15:30:03.992623 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:04.492611601 +0000 UTC m=+148.165465838 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.096232 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:04 crc kubenswrapper[5008]: E0129 15:30:04.096384 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:04.596359458 +0000 UTC m=+148.269213695 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.096853 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:04 crc kubenswrapper[5008]: E0129 15:30:04.097121 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:04.597112467 +0000 UTC m=+148.269966704 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.197550 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:04 crc kubenswrapper[5008]: E0129 15:30:04.197774 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:04.697741882 +0000 UTC m=+148.370596119 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.198176 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:04 crc kubenswrapper[5008]: E0129 15:30:04.198543 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:04.698532113 +0000 UTC m=+148.371386350 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.299439 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:04 crc kubenswrapper[5008]: E0129 15:30:04.299914 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:04.799891336 +0000 UTC m=+148.472745573 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.401527 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:04 crc kubenswrapper[5008]: E0129 15:30:04.401886 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:04.901874786 +0000 UTC m=+148.574729023 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.446441 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:04 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:04 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:04 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.446515 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.502290 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:04 crc kubenswrapper[5008]: E0129 15:30:04.502444 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:05.002422089 +0000 UTC m=+148.675276326 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.503006 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:04 crc kubenswrapper[5008]: E0129 15:30:04.503339 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:05.003321012 +0000 UTC m=+148.676175249 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.603941 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:04 crc kubenswrapper[5008]: E0129 15:30:04.604198 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:05.104150952 +0000 UTC m=+148.777005229 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.604358 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:04 crc kubenswrapper[5008]: E0129 15:30:04.604770 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:05.104755588 +0000 UTC m=+148.777609815 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.608937 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4268l" event={"ID":"7473d665-3627-4470-a820-ebdbdc113587","Type":"ContainerStarted","Data":"8d7598ad2c3c5a660fb19d3ee369a6710759e6bbe8cbe47b3f02e5b7530f821c"} Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.609125 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-4268l" Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.611128 5008 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4268l container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.611176 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-4268l" podUID="7473d665-3627-4470-a820-ebdbdc113587" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.611648 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4" event={"ID":"4a912999-007c-495d-aaa3-857d76158a91","Type":"ContainerStarted","Data":"74e48ee561dff74c0b937607b1d67f636544c839b5dfad578f5c993d847e004b"} Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.613124 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" event={"ID":"272fd84c-e1ec-47ce-a8dc-fb0573d1208c","Type":"ContainerStarted","Data":"391533279b80e4f5f53727ee47007e86e4298a4d570c7b399251dd3de6e7d292"} Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.613809 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.614804 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk" event={"ID":"e3105b11-cb5b-4006-8f1b-17b90922d743","Type":"ContainerStarted","Data":"0d68e51992e60e13aa2ec240834f40001678fbd6640680f3d58ebe34a71c7d34"} Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.614837 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk" event={"ID":"e3105b11-cb5b-4006-8f1b-17b90922d743","Type":"ContainerStarted","Data":"7f1a73c150ece73daa71b0bfe26d7b550d33ef87ca603f83e488c75bfe1df3c7"} Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.615829 5008 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-mqnz8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness 
probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.615879 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" podUID="272fd84c-e1ec-47ce-a8dc-fb0573d1208c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.616314 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" event={"ID":"b1a4a04b-067c-43f1-b355-46161babe869","Type":"ContainerStarted","Data":"3e1d83d49207f7e8ce5235b5d25891dfd2e43340feba1d11402b5242e6b975a7"} Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.616432 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" podUID="b1a4a04b-067c-43f1-b355-46161babe869" containerName="collect-profiles" containerID="cri-o://3e1d83d49207f7e8ce5235b5d25891dfd2e43340feba1d11402b5242e6b975a7" gracePeriod=30 Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.620712 5008 generic.go:334] "Generic (PLEG): container finished" podID="4adf65cb-4f11-4061-bcb5-71c3d9b890f7" containerID="0a1a01356733e8fdcf29791389d756c3ebde2fc9de1824cec4875d7045e6d565" exitCode=0 Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.620839 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" event={"ID":"4adf65cb-4f11-4061-bcb5-71c3d9b890f7","Type":"ContainerDied","Data":"0a1a01356733e8fdcf29791389d756c3ebde2fc9de1824cec4875d7045e6d565"} Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.624579 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-p7nds" event={"ID":"aa595b2b-fee5-4e54-926b-40571cf2f472","Type":"ContainerStarted","Data":"7d900cb3e39061652d19e25fcee4c156a7509c50ea1f253d827a31184a732862"} Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.628556 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s5vvl" event={"ID":"20ed8d47-c62e-4dfd-aa4d-630a6db1b3a9","Type":"ContainerStarted","Data":"24be9116f08661dd2ec1ffb7c3811b0e9ea964d72aeb90b316f7eab89f80a3fd"} Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.628614 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s5vvl" event={"ID":"20ed8d47-c62e-4dfd-aa4d-630a6db1b3a9","Type":"ContainerStarted","Data":"f25d5aefbd420ed5cf85b13633187a76a22fb95cc3f616f0f209c2dfbb186574"} Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.631094 5008 generic.go:334] "Generic (PLEG): container finished" podID="00332b75-a73b-49c1-9b72-73445baccf6d" containerID="3bfad117be29eee4bccfcaee08b906879445a7ed0a1bbcdc5632ce698e47ade9" exitCode=0 Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.631219 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" event={"ID":"00332b75-a73b-49c1-9b72-73445baccf6d","Type":"ContainerDied","Data":"3bfad117be29eee4bccfcaee08b906879445a7ed0a1bbcdc5632ce698e47ade9"} Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.632771 5008 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" podStartSLOduration=125.632750321 podStartE2EDuration="2m5.632750321s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:03.630323436 +0000 UTC m=+147.303177703" watchObservedRunningTime="2026-01-29 15:30:04.632750321 +0000 UTC m=+148.305604558" Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.634201 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4" event={"ID":"632f321e-e374-410c-9dc3-0aacadc97f3b","Type":"ContainerStarted","Data":"3296cae282984c7e6920a454f45d67fa5e778e435d6dbd7baa1c7f2891ef7698"} Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.634250 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4" event={"ID":"632f321e-e374-410c-9dc3-0aacadc97f3b","Type":"ContainerStarted","Data":"8a3fdc3b71ca79a1299d1e828bd840933f9809724fa7bd5c34abc069769ee2f0"} Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.634289 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-4268l" podStartSLOduration=125.634278921 podStartE2EDuration="2m5.634278921s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:04.632617588 +0000 UTC m=+148.305471845" watchObservedRunningTime="2026-01-29 15:30:04.634278921 +0000 UTC m=+148.307133158" Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.640002 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll" event={"ID":"a14210e2-42e9-45d9-8633-a5df1a863a9f","Type":"ContainerStarted","Data":"1e0a0d9ca8dff4c21b5f79fbacec87777d92f4850a7d9e7c69963e8eca6ad82d"} Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.645849 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf" event={"ID":"1408f146-4652-41e3-8947-2f230e515750","Type":"ContainerStarted","Data":"0d75173362fa52d9cc595b3470116e9df07384275cde3f5e4d7aa4ccbd9945e4"} Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.648712 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" event={"ID":"0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277","Type":"ContainerStarted","Data":"5a1e2653892ae4b1ba274181729a0761b119f8409558bb4b94fe34fc6adcd12b"} Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.649118 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.650462 5008 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-zvhxk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.650509 5008 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" podUID="0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.650652 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94" event={"ID":"ec989c54-8ec3-4f9d-87b0-2665776ffd15","Type":"ContainerStarted","Data":"1b82191d9c4944ce570495bcc0385f820b0df1ac7caeefc10b73c411e5f4e461"} Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.652420 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-cb6xn" event={"ID":"0b6fe31f-5401-4a2e-bccb-e57fab2a35ba","Type":"ContainerStarted","Data":"bf02f6d83597a645783d7a4b36e0c926cbd336c8598ba779c42fe94294415f8f"} Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.652446 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-cb6xn" event={"ID":"0b6fe31f-5401-4a2e-bccb-e57fab2a35ba","Type":"ContainerStarted","Data":"44786dbd89839bfaeeb45009ec688ded945e7d27a0f947c0e5d968e2ac0c9c82"} Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.655520 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" podStartSLOduration=125.655510537 podStartE2EDuration="2m5.655510537s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:04.653399191 +0000 UTC m=+148.326253428" watchObservedRunningTime="2026-01-29 15:30:04.655510537 +0000 UTC m=+148.328364774" Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.657081 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-tw5d5" event={"ID":"a161323e-d13e-46da-b8bd-347b56ef5110","Type":"ContainerStarted","Data":"2bd500525eb46ec20a4b0e11ad856c6dccaa8b6e0c0742e92a837b87a9f961e3"} Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.660107 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-g2rk6" event={"ID":"3f7de4a5-3819-41c0-9e2e-766dcff408bb","Type":"ContainerStarted","Data":"df5ae52d7003ab128c12d9fe4ed77a8f1ef6ec06ad705d9f914ff4635fb217e5"} Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.663530 5008 generic.go:334] "Generic (PLEG): container finished" podID="653b37fe-d452-4111-b27f-ef75530abe41" containerID="103761a5e1810d875d78be2a091de722cf91467b2e894ae56cf0127f4867da60" exitCode=0 Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.663577 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4l85w" event={"ID":"653b37fe-d452-4111-b27f-ef75530abe41","Type":"ContainerDied","Data":"103761a5e1810d875d78be2a091de722cf91467b2e894ae56cf0127f4867da60"} Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.667125 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl" event={"ID":"3c5e8be2-fe94-488c-801e-d1a56700bfa5","Type":"ContainerStarted","Data":"cd1a48045d8ac4b70ad1691f2d053ec8afe9c01194bcbe9830b15c4fe2e87ba3"} Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.671042 5008 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-p7nds" podStartSLOduration=7.671023583 podStartE2EDuration="7.671023583s" podCreationTimestamp="2026-01-29 15:29:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:04.666863654 +0000 UTC m=+148.339717891" watchObservedRunningTime="2026-01-29 15:30:04.671023583 +0000 UTC m=+148.343877820" Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.692766 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9bx7" event={"ID":"cf3d6df4-e07e-4d72-b2b6-20dcb29700d7","Type":"ContainerStarted","Data":"8afae448fd06804663a482d0a781ad7f23f4ad9fbf2f57bda116e75b1bea36a1"} Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.701878 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6" event={"ID":"8d495a4f-d952-4050-a895-e6650c083e0d","Type":"ContainerStarted","Data":"76fde1e7356564005a3d5c2e44cfd4e65aa26bf34cad6b298cc295a256ca252e"} Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.704019 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt" event={"ID":"217f16d7-943b-4603-88fa-155377da9788","Type":"ContainerStarted","Data":"8b6da3bdc6d1eba4c81b206e8eb959228ded2c4354635ea2f17c3404bd13a2e5"} Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.705466 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:04 crc kubenswrapper[5008]: E0129 15:30:04.706460 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:05.20644568 +0000 UTC m=+148.879299917 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.719880 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-zs2tk" event={"ID":"5b987d67-e424-4286-a25d-11bfc4d1e577","Type":"ContainerStarted","Data":"60eb230a415a5a7dbb7ada59496bbe501d736b44fa85bf2c654d2168bfa57b98"} Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.720671 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-zs2tk" Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.735110 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" podStartSLOduration=126.73508956 podStartE2EDuration="2m6.73508956s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:04.732641116 +0000 UTC m=+148.405495363" watchObservedRunningTime="2026-01-29 15:30:04.73508956 +0000 UTC m=+148.407943807" Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.743921 5008 patch_prober.go:28] interesting pod/console-operator-58897d9998-zs2tk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.743979 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-zs2tk" podUID="5b987d67-e424-4286-a25d-11bfc4d1e577" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.753991 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74" event={"ID":"6db03bb1-4833-4d3f-82d5-08ec5710251f","Type":"ContainerStarted","Data":"6a05b6684e6b55217056921f9d150f3a111d0469bdd16f7be671021d94fbb59f"} Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.763036 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7" event={"ID":"8eb3ecfb-3675-4931-b618-9a5ba6d23b1d","Type":"ContainerStarted","Data":"1469fd1bbc8563be92335c838cbab649b37256c9755798768624d28bc156469e"} Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.772474 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4" podStartSLOduration=4.761387288 podStartE2EDuration="4.761387288s" podCreationTimestamp="2026-01-29 15:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:04.755566776 +0000 UTC m=+148.428421013" 
watchObservedRunningTime="2026-01-29 15:30:04.761387288 +0000 UTC m=+148.434241555" Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.774950 5008 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-6zjns container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.36:6443/healthz\": dial tcp 10.217.0.36:6443: connect: connection refused" start-of-body= Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.775022 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" podUID="30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.36:6443/healthz\": dial tcp 10.217.0.36:6443: connect: connection refused" Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.775301 5008 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-4zwkl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.775419 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" podUID="f56b5e44-f079-4c56-9e19-e09996979003" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.777602 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9bx7" podStartSLOduration=125.777587183 podStartE2EDuration="2m5.777587183s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:04.774842381 +0000 UTC m=+148.447696608" watchObservedRunningTime="2026-01-29 15:30:04.777587183 +0000 UTC m=+148.450441420" Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.797586 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9b7ll" podStartSLOduration=125.797565716 podStartE2EDuration="2m5.797565716s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:04.794238629 +0000 UTC m=+148.467092866" watchObservedRunningTime="2026-01-29 15:30:04.797565716 +0000 UTC m=+148.470419953" Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.825389 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:04 crc kubenswrapper[5008]: E0129 15:30:04.826272 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-29 15:30:05.326252627 +0000 UTC m=+148.999106944 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.845878 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-zs2tk" podStartSLOduration=126.845862231 podStartE2EDuration="2m6.845862231s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:04.827325565 +0000 UTC m=+148.500179823" watchObservedRunningTime="2026-01-29 15:30:04.845862231 +0000 UTC m=+148.518716468" Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.870382 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9gw94" podStartSLOduration=125.870358732 podStartE2EDuration="2m5.870358732s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:04.847739789 +0000 UTC m=+148.520594026" watchObservedRunningTime="2026-01-29 15:30:04.870358732 +0000 UTC m=+148.543212979" Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.901260 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2h8sf" podStartSLOduration=125.90123522 podStartE2EDuration="2m5.90123522s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:04.873278148 +0000 UTC m=+148.546132385" watchObservedRunningTime="2026-01-29 15:30:04.90123522 +0000 UTC m=+148.574089467" Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.929730 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:04 crc kubenswrapper[5008]: E0129 15:30:04.930943 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:05.430908158 +0000 UTC m=+149.103762395 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.954442 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" podStartSLOduration=125.954418082 podStartE2EDuration="2m5.954418082s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:04.936327018 +0000 UTC m=+148.609181255" watchObservedRunningTime="2026-01-29 15:30:04.954418082 +0000 UTC m=+148.627272319" Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.983995 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8fx6" podStartSLOduration=126.983977936 podStartE2EDuration="2m6.983977936s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:04.957773381 +0000 UTC m=+148.630627618" watchObservedRunningTime="2026-01-29 15:30:04.983977936 +0000 UTC m=+148.656832173" Jan 29 15:30:04 crc kubenswrapper[5008]: I0129 15:30:04.984204 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl" podStartSLOduration=126.984199293 podStartE2EDuration="2m6.984199293s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:04.982648382 +0000 UTC m=+148.655502629" watchObservedRunningTime="2026-01-29 15:30:04.984199293 +0000 UTC m=+148.657053540" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.006477 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8mt" podStartSLOduration=127.006457476 podStartE2EDuration="2m7.006457476s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:05.004025612 +0000 UTC m=+148.676879869" watchObservedRunningTime="2026-01-29 15:30:05.006457476 +0000 UTC m=+148.679311723" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.032121 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:05 crc kubenswrapper[5008]: E0129 15:30:05.032468 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-29 15:30:05.532457386 +0000 UTC m=+149.205311613 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.051463 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-g2rk6" podStartSLOduration=127.051449963 podStartE2EDuration="2m7.051449963s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:05.051212096 +0000 UTC m=+148.724066353" watchObservedRunningTime="2026-01-29 15:30:05.051449963 +0000 UTC m=+148.724304200" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.079940 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f5fs6" podStartSLOduration=126.079922338 podStartE2EDuration="2m6.079922338s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:05.078393539 +0000 UTC m=+148.751247776" watchObservedRunningTime="2026-01-29 15:30:05.079922338 +0000 UTC m=+148.752776585" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.133165 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:05 crc kubenswrapper[5008]: E0129 15:30:05.133588 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:05.633571783 +0000 UTC m=+149.306426020 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.141842 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-2dsnp" podStartSLOduration=127.141825819 podStartE2EDuration="2m7.141825819s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:05.103061095 +0000 UTC m=+148.775915332" watchObservedRunningTime="2026-01-29 15:30:05.141825819 +0000 UTC m=+148.814680056" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.142125 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-v7r8x" podStartSLOduration=127.142120617 podStartE2EDuration="2m7.142120617s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:05.135928325 +0000 UTC m=+148.808782552" watchObservedRunningTime="2026-01-29 15:30:05.142120617 +0000 UTC m=+148.814974854" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.161900 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-brcd7" podStartSLOduration=127.161881915 podStartE2EDuration="2m7.161881915s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:05.1598048 +0000 UTC m=+148.832659047" watchObservedRunningTime="2026-01-29 15:30:05.161881915 +0000 UTC m=+148.834736152" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.234569 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:05 crc kubenswrapper[5008]: E0129 15:30:05.234951 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:05.734934427 +0000 UTC m=+149.407788664 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.335422 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:05 crc kubenswrapper[5008]: E0129 15:30:05.335575 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:05.835543072 +0000 UTC m=+149.508397309 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.335662 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:05 crc kubenswrapper[5008]: E0129 15:30:05.335945 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:05.835935472 +0000 UTC m=+149.508789709 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.437101 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:05 crc kubenswrapper[5008]: E0129 15:30:05.437382 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:05.937332986 +0000 UTC m=+149.610187263 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.437514 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:05 crc kubenswrapper[5008]: E0129 15:30:05.437847 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:05.93783432 +0000 UTC m=+149.610688557 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.441236 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:05 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:05 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:05 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.441277 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.538259 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:05 crc kubenswrapper[5008]: E0129 15:30:05.538432 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.038410992 +0000 UTC m=+149.711265229 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.538606 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:05 crc kubenswrapper[5008]: E0129 15:30:05.538926 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.038916786 +0000 UTC m=+149.711771023 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.639321 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:05 crc kubenswrapper[5008]: E0129 15:30:05.639474 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.139440818 +0000 UTC m=+149.812295095 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.639569 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:05 crc kubenswrapper[5008]: E0129 15:30:05.639910 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.13989671 +0000 UTC m=+149.812750947 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.740246 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:05 crc kubenswrapper[5008]: E0129 15:30:05.740447 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.240402081 +0000 UTC m=+149.913256358 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.740525 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:05 crc kubenswrapper[5008]: E0129 15:30:05.741033 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.241003657 +0000 UTC m=+149.913857944 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.768483 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-tw5d5" event={"ID":"a161323e-d13e-46da-b8bd-347b56ef5110","Type":"ContainerStarted","Data":"8a3fd0c545cecffef0eefee384beab5dfdc354a553d84845f204cb0ed3f9d3f5"} Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.770384 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74" event={"ID":"6db03bb1-4833-4d3f-82d5-08ec5710251f","Type":"ContainerStarted","Data":"1e260b26f54be54fccd58ded45998736fb21255fd1fbad49025c25da64de58b4"} Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.772249 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" event={"ID":"00332b75-a73b-49c1-9b72-73445baccf6d","Type":"ContainerStarted","Data":"ada8cd4946b6f3f363e70e26539d7ad5c75f7dff04f50ead2bda78d440c0a541"} Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.786843 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29494995-x4n8l_b1a4a04b-067c-43f1-b355-46161babe869/collect-profiles/0.log" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.786914 5008 generic.go:334] "Generic (PLEG): container finished" podID="b1a4a04b-067c-43f1-b355-46161babe869" containerID="3e1d83d49207f7e8ce5235b5d25891dfd2e43340feba1d11402b5242e6b975a7" exitCode=2 Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.787140 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" event={"ID":"b1a4a04b-067c-43f1-b355-46161babe869","Type":"ContainerDied","Data":"3e1d83d49207f7e8ce5235b5d25891dfd2e43340feba1d11402b5242e6b975a7"} Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.788740 5008 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-4zwkl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.788763 5008 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-zvhxk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.788822 5008 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4268l container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.788860 5008 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-mqnz8 
container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.788882 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" podUID="272fd84c-e1ec-47ce-a8dc-fb0573d1208c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.788880 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-4268l" podUID="7473d665-3627-4470-a820-ebdbdc113587" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.788790 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" podUID="f56b5e44-f079-4c56-9e19-e09996979003" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.788827 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" podUID="0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.789144 5008 patch_prober.go:28] interesting pod/console-operator-58897d9998-zs2tk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.789563 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-zs2tk" podUID="5b987d67-e424-4286-a25d-11bfc4d1e577" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.791560 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.806043 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-fsx74" podStartSLOduration=126.806027739 podStartE2EDuration="2m6.806027739s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:05.805478584 +0000 UTC m=+149.478332821" watchObservedRunningTime="2026-01-29 15:30:05.806027739 +0000 UTC m=+149.478881976" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.838208 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmtm4" podStartSLOduration=126.838185541 podStartE2EDuration="2m6.838185541s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:05.837038061 +0000 UTC m=+149.509892318" watchObservedRunningTime="2026-01-29 15:30:05.838185541 +0000 UTC m=+149.511039778" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.844958 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:05 crc kubenswrapper[5008]: E0129 15:30:05.847033 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.347012272 +0000 UTC m=+150.019866509 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.898395 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk" podStartSLOduration=126.898378257 podStartE2EDuration="2m6.898378257s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:05.876108564 +0000 UTC m=+149.548962811" watchObservedRunningTime="2026-01-29 15:30:05.898378257 +0000 UTC m=+149.571232504" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.899215 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s5vvl" podStartSLOduration=126.899208099 podStartE2EDuration="2m6.899208099s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:05.8977152 +0000 UTC m=+149.570569467" watchObservedRunningTime="2026-01-29 15:30:05.899208099 +0000 UTC m=+149.572062346" Jan 29 15:30:05 crc kubenswrapper[5008]: I0129 15:30:05.947629 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:05 crc kubenswrapper[5008]: E0129 15:30:05.948097 5008 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.448081988 +0000 UTC m=+150.120936225 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.049143 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:06 crc kubenswrapper[5008]: E0129 15:30:06.049357 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.549333449 +0000 UTC m=+150.222187686 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.049520 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:06 crc kubenswrapper[5008]: E0129 15:30:06.049940 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.549930785 +0000 UTC m=+150.222785022 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.151038 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:06 crc kubenswrapper[5008]: E0129 15:30:06.151206 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.651183236 +0000 UTC m=+150.324037553 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.151300 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:06 crc kubenswrapper[5008]: E0129 15:30:06.151659 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.651649588 +0000 UTC m=+150.324503825 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.252334 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:06 crc kubenswrapper[5008]: E0129 15:30:06.252518 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.752471217 +0000 UTC m=+150.425325454 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.252940 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:06 crc kubenswrapper[5008]: E0129 15:30:06.253358 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.75334796 +0000 UTC m=+150.426202197 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.353929 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:06 crc kubenswrapper[5008]: E0129 15:30:06.354331 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.854290413 +0000 UTC m=+150.527144670 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.354523 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.371846 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.442017 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:06 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:06 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:06 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.442134 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.456363 5008 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.456420 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.456484 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.456533 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:06 crc kubenswrapper[5008]: E0129 15:30:06.457031 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:06.957009233 +0000 UTC m=+150.629863660 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.462911 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.462927 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.557518 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:06 crc kubenswrapper[5008]: E0129 15:30:06.557761 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:07.057719669 +0000 UTC m=+150.730573916 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.558129 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:06 crc kubenswrapper[5008]: E0129 15:30:06.558491 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:07.058479829 +0000 UTC m=+150.731334266 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.656888 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:06 crc kubenswrapper[5008]: E0129 15:30:06.659141 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:07.159112374 +0000 UTC m=+150.831966611 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.659690 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.660186 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:06 crc kubenswrapper[5008]: E0129 15:30:06.660600 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:07.160590023 +0000 UTC m=+150.833444360 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.666610 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.762493 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:06 crc kubenswrapper[5008]: E0129 15:30:06.763388 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:07.263362783 +0000 UTC m=+150.936217020 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.797613 5008 patch_prober.go:28] interesting pod/console-operator-58897d9998-zs2tk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body=
Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.797638 5008 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-mqnz8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body=
Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.797673 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-zs2tk" podUID="5b987d67-e424-4286-a25d-11bfc4d1e577" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused"
Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.797696 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" podUID="272fd84c-e1ec-47ce-a8dc-fb0573d1208c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused"
Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.865126 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:06 crc kubenswrapper[5008]: E0129 15:30:06.865762 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:07.365748024 +0000 UTC m=+151.038602261 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.872535 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-cb6xn" podStartSLOduration=127.872513681 podStartE2EDuration="2m7.872513681s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:05.922076647 +0000 UTC m=+149.594930904" watchObservedRunningTime="2026-01-29 15:30:06.872513681 +0000 UTC m=+150.545367918"
Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.966006 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:06 crc kubenswrapper[5008]: E0129 15:30:06.966189 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:07.466164483 +0000 UTC m=+151.139018740 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:06 crc kubenswrapper[5008]: I0129 15:30:06.966258 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:06 crc kubenswrapper[5008]: E0129 15:30:06.966633 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:07.466621746 +0000 UTC m=+151.139475983 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:07 crc kubenswrapper[5008]: I0129 15:30:07.066967 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:07 crc kubenswrapper[5008]: E0129 15:30:07.067306 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:07.5672877 +0000 UTC m=+151.240141937 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:07 crc kubenswrapper[5008]: I0129 15:30:07.168983 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:07 crc kubenswrapper[5008]: E0129 15:30:07.169411 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:07.669393214 +0000 UTC m=+151.342247471 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:07 crc kubenswrapper[5008]: I0129 15:30:07.270513 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:07 crc kubenswrapper[5008]: E0129 15:30:07.270874 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:07.770853531 +0000 UTC m=+151.443707778 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:07 crc kubenswrapper[5008]: I0129 15:30:07.371958 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:07 crc kubenswrapper[5008]: E0129 15:30:07.372455 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:07.872430839 +0000 UTC m=+151.545285076 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:07 crc kubenswrapper[5008]: I0129 15:30:07.441364 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:30:07 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld
Jan 29 15:30:07 crc kubenswrapper[5008]: [+]process-running ok
Jan 29 15:30:07 crc kubenswrapper[5008]: healthz check failed
Jan 29 15:30:07 crc kubenswrapper[5008]: I0129 15:30:07.441442 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:30:07 crc kubenswrapper[5008]: I0129 15:30:07.472515 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:07 crc kubenswrapper[5008]: E0129 15:30:07.472713 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:07.972676614 +0000 UTC m=+151.645530851 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:07 crc kubenswrapper[5008]: I0129 15:30:07.472903 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:07 crc kubenswrapper[5008]: E0129 15:30:07.473193 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:07.973180997 +0000 UTC m=+151.646035244 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:07 crc kubenswrapper[5008]: I0129 15:30:07.573519 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:07 crc kubenswrapper[5008]: E0129 15:30:07.573537 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:08.073513814 +0000 UTC m=+151.746368051 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:07 crc kubenswrapper[5008]: I0129 15:30:07.574075 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:07 crc kubenswrapper[5008]: E0129 15:30:07.574437 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:08.074429359 +0000 UTC m=+151.747283596 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:07 crc kubenswrapper[5008]: I0129 15:30:07.675039 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:07 crc kubenswrapper[5008]: E0129 15:30:07.675403 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:08.175385952 +0000 UTC m=+151.848240189 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:07 crc kubenswrapper[5008]: I0129 15:30:07.777750 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:07 crc kubenswrapper[5008]: E0129 15:30:07.780577 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:08.280549145 +0000 UTC m=+151.953403392 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:07 crc kubenswrapper[5008]: I0129 15:30:07.878840 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:07 crc kubenswrapper[5008]: E0129 15:30:07.879283 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:08.379262779 +0000 UTC m=+152.052117016 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:07 crc kubenswrapper[5008]: I0129 15:30:07.980548 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:07 crc kubenswrapper[5008]: E0129 15:30:07.980946 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:08.480931711 +0000 UTC m=+152.153785948 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:08 crc kubenswrapper[5008]: I0129 15:30:08.081646 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:08 crc kubenswrapper[5008]: E0129 15:30:08.081910 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:08.581882504 +0000 UTC m=+152.254736741 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:08 crc kubenswrapper[5008]: I0129 15:30:08.081982 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:08 crc kubenswrapper[5008]: E0129 15:30:08.082316 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:08.582298645 +0000 UTC m=+152.255152882 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:08 crc kubenswrapper[5008]: I0129 15:30:08.182908 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:08 crc kubenswrapper[5008]: E0129 15:30:08.183113 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:08.683079603 +0000 UTC m=+152.355933850 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:08 crc kubenswrapper[5008]: I0129 15:30:08.183343 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:08 crc kubenswrapper[5008]: E0129 15:30:08.183949 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:08.683919586 +0000 UTC m=+152.356773833 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:08 crc kubenswrapper[5008]: I0129 15:30:08.284307 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:08 crc kubenswrapper[5008]: E0129 15:30:08.284757 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:08.784731565 +0000 UTC m=+152.457585832 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:08 crc kubenswrapper[5008]: I0129 15:30:08.386501 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:08 crc kubenswrapper[5008]: E0129 15:30:08.387245 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:08.887208488 +0000 UTC m=+152.560062775 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:08 crc kubenswrapper[5008]: I0129 15:30:08.441047 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:30:08 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld
Jan 29 15:30:08 crc kubenswrapper[5008]: [+]process-running ok
Jan 29 15:30:08 crc kubenswrapper[5008]: healthz check failed
Jan 29 15:30:08 crc kubenswrapper[5008]: I0129 15:30:08.441115 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:30:08 crc kubenswrapper[5008]: I0129 15:30:08.487921 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:08 crc kubenswrapper[5008]: E0129 15:30:08.488349 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:08.988325335 +0000 UTC m=+152.661179592 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:08 crc kubenswrapper[5008]: I0129 15:30:08.590227 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:08 crc kubenswrapper[5008]: E0129 15:30:08.590653 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:09.090636324 +0000 UTC m=+152.763490581 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:08 crc kubenswrapper[5008]: I0129 15:30:08.691986 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:08 crc kubenswrapper[5008]: E0129 15:30:08.692589 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:09.192563173 +0000 UTC m=+152.865417450 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:08 crc kubenswrapper[5008]: I0129 15:30:08.794042 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:08 crc kubenswrapper[5008]: E0129 15:30:08.794592 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:09.294572924 +0000 UTC m=+152.967427191 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:08 crc kubenswrapper[5008]: I0129 15:30:08.895489 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:08 crc kubenswrapper[5008]: E0129 15:30:08.895975 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:09.395957237 +0000 UTC m=+153.068811474 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:08 crc kubenswrapper[5008]: I0129 15:30:08.997236 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:08 crc kubenswrapper[5008]: E0129 15:30:08.997724 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:09.497709172 +0000 UTC m=+153.170563409 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.098692 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:09 crc kubenswrapper[5008]: E0129 15:30:09.099017 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:09.598967753 +0000 UTC m=+153.271822000 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.099228 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:09 crc kubenswrapper[5008]: E0129 15:30:09.099576 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:09.599557729 +0000 UTC m=+153.272411966 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.125101 5008 patch_prober.go:28] interesting pod/downloads-7954f5f757-6wmrp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body=
Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.125170 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-6wmrp" podUID="64cf2ff9-40f4-48a5-a16c-6513cf0470bd" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused"
Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.128974 5008 patch_prober.go:28] interesting pod/downloads-7954f5f757-6wmrp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body=
Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.129048 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6wmrp" podUID="64cf2ff9-40f4-48a5-a16c-6513cf0470bd" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused"
Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.200459 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:09 crc kubenswrapper[5008]: E0129 15:30:09.201006 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:09.700985374 +0000 UTC m=+153.373839611 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.302447 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:09 crc kubenswrapper[5008]: E0129 15:30:09.302879 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:09.802857471 +0000 UTC m=+153.475711708 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.403669 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:09 crc kubenswrapper[5008]: E0129 15:30:09.404172 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:09.904143883 +0000 UTC m=+153.576998150 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.441386 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:30:09 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld
Jan 29 15:30:09 crc kubenswrapper[5008]: [+]process-running ok
Jan 29 15:30:09 crc kubenswrapper[5008]: healthz check failed
Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.441463 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.506037 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:09 crc kubenswrapper[5008]: E0129 15:30:09.506734 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:10.006707289 +0000 UTC m=+153.679561566 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.607836 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:09 crc kubenswrapper[5008]: E0129 15:30:09.608161 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:10.108141674 +0000 UTC m=+153.780995911 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.655605 5008 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fpmxk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.655669 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" podUID="7d5c80c8-4e74-4618-96c0-8e76168ad709" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.709735 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:09 crc kubenswrapper[5008]: E0129 15:30:09.710169 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:10.210153655 +0000 UTC m=+153.883007902 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.798859 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-g2rk6"
Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.800012 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-g2rk6"
Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.802106 5008 patch_prober.go:28] interesting pod/console-f9d7485db-g2rk6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.802155 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-g2rk6" podUID="3f7de4a5-3819-41c0-9e2e-766dcff408bb" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused"
Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.810444 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:09 crc kubenswrapper[5008]: E0129 15:30:09.810813 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:10.310760949 +0000 UTC m=+153.983615226 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.810924 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:09 crc kubenswrapper[5008]: E0129 15:30:09.811387 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:10.311374325 +0000 UTC m=+153.984228562 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.816147 5008 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-4zwkl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body=
Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.816202 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" podUID="f56b5e44-f079-4c56-9e19-e09996979003" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused"
Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.911880 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:09 crc kubenswrapper[5008]: E0129 15:30:09.912080 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:10.412053011 +0000 UTC m=+154.084907248 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.912236 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:09 crc kubenswrapper[5008]: E0129 15:30:09.912678 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:10.412658357 +0000 UTC m=+154.085512594 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.921257 5008 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-6zjns container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.36:6443/healthz\": dial tcp 10.217.0.36:6443: connect: connection refused" start-of-body=
Jan 29 15:30:09 crc kubenswrapper[5008]: I0129 15:30:09.921344 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" podUID="30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.36:6443/healthz\": dial tcp 10.217.0.36:6443: connect: connection refused"
Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.013562 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.013811 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:10.513753954 +0000 UTC m=+154.186608191 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.014006 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.014477 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:10.514463442 +0000 UTC m=+154.187317689 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.115482 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.115825 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:10.615758874 +0000 UTC m=+154.288613151 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.115930 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.116386 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:10.6163678 +0000 UTC m=+154.289222037 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.217352 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.217690 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:10.717643311 +0000 UTC m=+154.390497538 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.217899 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.218282 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:10.718250348 +0000 UTC m=+154.391104585 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.319530 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.319850 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:10.819760605 +0000 UTC m=+154.492614842 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.319951 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.320428 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:10.820418753 +0000 UTC m=+154.493272990 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.377931 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.378412 5008 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4268l container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.378481 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-4268l" podUID="7473d665-3627-4470-a820-ebdbdc113587" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.378427 5008 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4268l container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.378531 5008 patch_prober.go:28] interesting pod/console-operator-58897d9998-zs2tk container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.378564 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-4268l" podUID="7473d665-3627-4470-a820-ebdbdc113587" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.378644 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-zs2tk" podUID="5b987d67-e424-4286-a25d-11bfc4d1e577" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.379245 5008 patch_prober.go:28] interesting pod/console-operator-58897d9998-zs2tk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.379556 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-zs2tk" podUID="5b987d67-e424-4286-a25d-11bfc4d1e577" containerName="console-operator" probeResult="failure" 
output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.380303 5008 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-j8wt8 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" start-of-body= Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.380340 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" podUID="c9bc5b93-0c42-401c-8ca5-e5154e8be34d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.380349 5008 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-j8wt8 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" start-of-body= Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.380395 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" podUID="c9bc5b93-0c42-401c-8ca5-e5154e8be34d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.380825 5008 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-j8wt8 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" start-of-body= Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.380850 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" podUID="c9bc5b93-0c42-401c-8ca5-e5154e8be34d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.424630 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.424992 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:10.924935218 +0000 UTC m=+154.597789455 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.425201 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.427901 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:10.927878596 +0000 UTC m=+154.600732863 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.438233 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.441234 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:10 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:10 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:10 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.441292 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.526157 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.526335 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:11.026312223 +0000 UTC m=+154.699166470 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.526813 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.527154 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:11.027142884 +0000 UTC m=+154.699997121 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.628464 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.629201 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:11.129178576 +0000 UTC m=+154.802032823 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.732259 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:11.232233424 +0000 UTC m=+154.905087741 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.731778 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.834177 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.834355 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:11.334319127 +0000 UTC m=+155.007173374 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.834962 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.835408 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:11.335396676 +0000 UTC m=+155.008251013 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.861815 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.906992 5008 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-mqnz8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.907079 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" podUID="272fd84c-e1ec-47ce-a8dc-fb0573d1208c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.907163 5008 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-mqnz8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.907200 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" podUID="272fd84c-e1ec-47ce-a8dc-fb0573d1208c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.920115 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.921154 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.924469 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.936365 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.937839 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:10 crc kubenswrapper[5008]: E0129 15:30:10.938182 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:11.438162786 +0000 UTC m=+155.111017033 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.946732 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.958879 5008 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-zvhxk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.958953 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" podUID="0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.958996 5008 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-zvhxk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Jan 29 15:30:10 crc kubenswrapper[5008]: I0129 15:30:10.959074 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" podUID="0ba6b3e7-02fc-4ad5-b6f1-8fcbd1940277" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.032152 5008 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.033012 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.034578 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.039323 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b4af13c-49f7-4c06-840c-6e976b55fabd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5b4af13c-49f7-4c06-840c-6e976b55fabd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.039378 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b4af13c-49f7-4c06-840c-6e976b55fabd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5b4af13c-49f7-4c06-840c-6e976b55fabd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.039435 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:11 crc kubenswrapper[5008]: E0129 15:30:11.039827 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:11.539812848 +0000 UTC m=+155.212667085 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.044018 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.048825 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.140940 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.141086 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b4af13c-49f7-4c06-840c-6e976b55fabd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5b4af13c-49f7-4c06-840c-6e976b55fabd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 15:30:11 crc kubenswrapper[5008]: E0129 15:30:11.141395 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:11.641160721 +0000 UTC m=+155.314014958 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.141469 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b4af13c-49f7-4c06-840c-6e976b55fabd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5b4af13c-49f7-4c06-840c-6e976b55fabd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.141603 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b4af13c-49f7-4c06-840c-6e976b55fabd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5b4af13c-49f7-4c06-840c-6e976b55fabd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.141553 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5864482d-142b-4ab3-a5e1-d48e89d3dde0-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5864482d-142b-4ab3-a5e1-d48e89d3dde0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.141720 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5864482d-142b-4ab3-a5e1-d48e89d3dde0-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5864482d-142b-4ab3-a5e1-d48e89d3dde0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.141843 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:11 crc kubenswrapper[5008]: E0129 15:30:11.142178 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:11.642155627 +0000 UTC m=+155.315009864 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.148525 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.175057 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b4af13c-49f7-4c06-840c-6e976b55fabd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5b4af13c-49f7-4c06-840c-6e976b55fabd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.244484 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:11 crc kubenswrapper[5008]: E0129 15:30:11.244740 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:11.744692421 +0000 UTC m=+155.417546658 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.244820 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5864482d-142b-4ab3-a5e1-d48e89d3dde0-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5864482d-142b-4ab3-a5e1-d48e89d3dde0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.244860 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5864482d-142b-4ab3-a5e1-d48e89d3dde0-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5864482d-142b-4ab3-a5e1-d48e89d3dde0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.244905 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.245348 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5864482d-142b-4ab3-a5e1-d48e89d3dde0-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5864482d-142b-4ab3-a5e1-d48e89d3dde0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:30:11 crc kubenswrapper[5008]: E0129 15:30:11.245588 5008 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:11.745560744 +0000 UTC m=+155.418414981 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.271309 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.271468 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5864482d-142b-4ab3-a5e1-d48e89d3dde0-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5864482d-142b-4ab3-a5e1-d48e89d3dde0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.348091 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.348974 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:30:11 crc kubenswrapper[5008]: E0129 15:30:11.350023 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:11.849984418 +0000 UTC m=+155.522838675 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.454930 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:11 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:11 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:11 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.454997 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.456047 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:11 crc kubenswrapper[5008]: E0129 15:30:11.462206 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:11.962188115 +0000 UTC m=+155.635042352 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.560950 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:11 crc kubenswrapper[5008]: E0129 15:30:11.561276 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:12.061253159 +0000 UTC m=+155.734107406 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.561559 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:11 crc kubenswrapper[5008]: E0129 15:30:11.561958 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:12.061947678 +0000 UTC m=+155.734801915 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:11 crc kubenswrapper[5008]: W0129 15:30:11.592630 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-9d04fe921496c7bdd7823f1588d749d1c623576ca9dc6e035670fc249e6120ed WatchSource:0}: Error finding container 9d04fe921496c7bdd7823f1588d749d1c623576ca9dc6e035670fc249e6120ed: Status 404 returned error can't find the container with id 9d04fe921496c7bdd7823f1588d749d1c623576ca9dc6e035670fc249e6120ed Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.662968 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:11 crc kubenswrapper[5008]: E0129 15:30:11.663238 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:12.163183528 +0000 UTC m=+155.836037785 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.663346 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:11 crc kubenswrapper[5008]: E0129 15:30:11.663701 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:12.163684801 +0000 UTC m=+155.836539238 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.765203 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.765355 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 15:30:11 crc kubenswrapper[5008]: E0129 15:30:11.765616 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:12.265580489 +0000 UTC m=+155.938434726 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:11 crc kubenswrapper[5008]: W0129 15:30:11.778458 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5864482d_142b_4ab3_a5e1_d48e89d3dde0.slice/crio-aa1577fad78ae8be2b88ef68cf00c8928dcb8476da5d533f937dc579b89d41cc WatchSource:0}: Error finding container aa1577fad78ae8be2b88ef68cf00c8928dcb8476da5d533f937dc579b89d41cc: Status 404 returned error can't find the container with id aa1577fad78ae8be2b88ef68cf00c8928dcb8476da5d533f937dc579b89d41cc Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.816489 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.820940 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" event={"ID":"4adf65cb-4f11-4061-bcb5-71c3d9b890f7","Type":"ContainerStarted","Data":"ae0ff5f28e7a513d7ddd669bd2c1d28678a491dedac61848acb0aa0f9238ab51"} Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.821856 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"6e2e89d4dd1bed8000cf6c6ddd761ad75f85e4c768b0da1e57589771bdb83f8e"} Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.823654 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4l85w" event={"ID":"653b37fe-d452-4111-b27f-ef75530abe41","Type":"ContainerStarted","Data":"ce7657427cf40ffcbee6a3dd4452793f8588fce59d300c77555b154e47a25d54"} Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.824483 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5864482d-142b-4ab3-a5e1-d48e89d3dde0","Type":"ContainerStarted","Data":"aa1577fad78ae8be2b88ef68cf00c8928dcb8476da5d533f937dc579b89d41cc"} Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.825281 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9d04fe921496c7bdd7823f1588d749d1c623576ca9dc6e035670fc249e6120ed"} Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.826083 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0595a070517962b338e483f65eb3819bc102b989f320901899588945e4149f1a"} Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.826201 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-tw5d5" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.840771 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-tw5d5" podStartSLOduration=14.840751767 podStartE2EDuration="14.840751767s" 
podCreationTimestamp="2026-01-29 15:29:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:11.839125415 +0000 UTC m=+155.511979662" watchObservedRunningTime="2026-01-29 15:30:11.840751767 +0000 UTC m=+155.513606004" Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.866718 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:11 crc kubenswrapper[5008]: E0129 15:30:11.867314 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:12.367296792 +0000 UTC m=+156.040151029 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:11 crc kubenswrapper[5008]: I0129 15:30:11.968356 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:11 crc kubenswrapper[5008]: E0129 15:30:11.970137 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:12.470110544 +0000 UTC m=+156.142964781 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.070383 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:12 crc kubenswrapper[5008]: E0129 15:30:12.070841 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-29 15:30:12.57081931 +0000 UTC m=+156.243673547 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.171949 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:12 crc kubenswrapper[5008]: E0129 15:30:12.172313 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:12.672273306 +0000 UTC m=+156.345127563 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.172802 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:12 crc kubenswrapper[5008]: E0129 15:30:12.173436 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:12.673422016 +0000 UTC m=+156.346276243 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.273755 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:12 crc kubenswrapper[5008]: E0129 15:30:12.274248 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:12.774226996 +0000 UTC m=+156.447081243 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.375479 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:12 crc kubenswrapper[5008]: E0129 15:30:12.375806 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:12.875794625 +0000 UTC m=+156.548648862 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.442038 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:12 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:12 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:12 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.442102 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.476536 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:12 crc kubenswrapper[5008]: E0129 15:30:12.476725 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:12.976699077 +0000 UTC m=+156.649553304 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.476837 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:12 crc kubenswrapper[5008]: E0129 15:30:12.477152 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:12.977144868 +0000 UTC m=+156.649999105 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.505760 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29494995-x4n8l_b1a4a04b-067c-43f1-b355-46161babe869/collect-profiles/0.log" Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.505834 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.578061 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsqb8\" (UniqueName: \"kubernetes.io/projected/b1a4a04b-067c-43f1-b355-46161babe869-kube-api-access-tsqb8\") pod \"b1a4a04b-067c-43f1-b355-46161babe869\" (UID: \"b1a4a04b-067c-43f1-b355-46161babe869\") " Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.578181 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.578207 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b1a4a04b-067c-43f1-b355-46161babe869-secret-volume\") pod \"b1a4a04b-067c-43f1-b355-46161babe869\" (UID: \"b1a4a04b-067c-43f1-b355-46161babe869\") " Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.578252 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1a4a04b-067c-43f1-b355-46161babe869-config-volume\") pod \"b1a4a04b-067c-43f1-b355-46161babe869\" (UID: \"b1a4a04b-067c-43f1-b355-46161babe869\") " Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.579193 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1a4a04b-067c-43f1-b355-46161babe869-config-volume" (OuterVolumeSpecName: "config-volume") pod "b1a4a04b-067c-43f1-b355-46161babe869" (UID: "b1a4a04b-067c-43f1-b355-46161babe869"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:30:12 crc kubenswrapper[5008]: E0129 15:30:12.579367 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.079328444 +0000 UTC m=+156.752182701 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.591371 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1a4a04b-067c-43f1-b355-46161babe869-kube-api-access-tsqb8" (OuterVolumeSpecName: "kube-api-access-tsqb8") pod "b1a4a04b-067c-43f1-b355-46161babe869" (UID: "b1a4a04b-067c-43f1-b355-46161babe869"). InnerVolumeSpecName "kube-api-access-tsqb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.591423 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1a4a04b-067c-43f1-b355-46161babe869-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b1a4a04b-067c-43f1-b355-46161babe869" (UID: "b1a4a04b-067c-43f1-b355-46161babe869"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.679570 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.679677 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsqb8\" (UniqueName: \"kubernetes.io/projected/b1a4a04b-067c-43f1-b355-46161babe869-kube-api-access-tsqb8\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.679694 5008 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b1a4a04b-067c-43f1-b355-46161babe869-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.679706 5008 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1a4a04b-067c-43f1-b355-46161babe869-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:12 crc kubenswrapper[5008]: E0129 15:30:12.680082 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.180063721 +0000 UTC m=+156.852918028 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.780671 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:12 crc kubenswrapper[5008]: E0129 15:30:12.780844 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.280813319 +0000 UTC m=+156.953667566 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.780941 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:12 crc kubenswrapper[5008]: E0129 15:30:12.781319 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.281307742 +0000 UTC m=+156.954161979 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.832116 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29494995-x4n8l_b1a4a04b-067c-43f1-b355-46161babe869/collect-profiles/0.log" Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.832506 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" event={"ID":"b1a4a04b-067c-43f1-b355-46161babe869","Type":"ContainerDied","Data":"1e01f1c47448495ee747be64b54e9beedefe2ff7cb0493bf37d8a12ea3bb0a20"} Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.832544 5008 scope.go:117] "RemoveContainer" containerID="3e1d83d49207f7e8ce5235b5d25891dfd2e43340feba1d11402b5242e6b975a7" Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.832557 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l" Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.834429 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5b4af13c-49f7-4c06-840c-6e976b55fabd","Type":"ContainerStarted","Data":"a017c760d370789ae6b77ac576c3c8c398bd726ece1c0385f34120f6300e19d6"} Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.854815 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" podStartSLOduration=134.854797276 podStartE2EDuration="2m14.854797276s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:12.854484698 +0000 UTC m=+156.527338955" watchObservedRunningTime="2026-01-29 15:30:12.854797276 +0000 UTC m=+156.527651513" Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.862114 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l"] Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.867007 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494995-x4n8l"] Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.882595 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:12 crc kubenswrapper[5008]: E0129 15:30:12.882764 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 15:30:13.382744038 +0000 UTC m=+157.055598275 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.882877 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:12 crc kubenswrapper[5008]: E0129 15:30:12.883146 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.383138188 +0000 UTC m=+157.055992425 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.984405 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:12 crc kubenswrapper[5008]: E0129 15:30:12.984634 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.484601235 +0000 UTC m=+157.157455472 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:12 crc kubenswrapper[5008]: I0129 15:30:12.984827 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:12 crc kubenswrapper[5008]: E0129 15:30:12.985172 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.485155659 +0000 UTC m=+157.158009896 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.005525 5008 patch_prober.go:28] interesting pod/dns-default-tw5d5 container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.217.0.44:8181/ready\": dial tcp 10.217.0.44:8181: connect: connection refused" start-of-body= Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.005649 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-tw5d5" podUID="a161323e-d13e-46da-b8bd-347b56ef5110" containerName="dns" probeResult="failure" output="Get \"http://10.217.0.44:8181/ready\": dial tcp 10.217.0.44:8181: connect: connection refused" Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.085964 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.086202 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.586167053 +0000 UTC m=+157.259021310 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.086267 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.086678 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.586668607 +0000 UTC m=+157.259522914 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.187657 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.187844 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.687813725 +0000 UTC m=+157.360667972 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.187952 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.188265 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.688255376 +0000 UTC m=+157.361109683 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.288884 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.289037 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.789013974 +0000 UTC m=+157.461868211 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.289213 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.289570 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.789558718 +0000 UTC m=+157.462412955 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.338202 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1a4a04b-067c-43f1-b355-46161babe869" path="/var/lib/kubelet/pods/b1a4a04b-067c-43f1-b355-46161babe869/volumes" Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.390245 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.390455 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.890428829 +0000 UTC m=+157.563283066 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.390516 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.390922 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.890905692 +0000 UTC m=+157.563759929 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.450766 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:13 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:13 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:13 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.450854 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.491817 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.492004 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.991976158 +0000 UTC m=+157.664830405 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.492185 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.492510 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:13.992494652 +0000 UTC m=+157.665348889 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.593353 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.593518 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:14.093488736 +0000 UTC m=+157.766342973 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.593698 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.594033 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:14.09402506 +0000 UTC m=+157.766879297 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.694881 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.695013 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:14.194985643 +0000 UTC m=+157.867839880 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.695232 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.695555 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:14.195548018 +0000 UTC m=+157.868402255 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.797025 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.797256 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:14.29722366 +0000 UTC m=+157.970077907 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.797745 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.798207 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:14.298189936 +0000 UTC m=+157.971044173 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.842378 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5864482d-142b-4ab3-a5e1-d48e89d3dde0","Type":"ContainerStarted","Data":"e632b499faf44559f02951cba34ddb7f268053890e895e7ed971208eb91b44b2"} Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.847552 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"28d436bfdd643f9abcc9f49d58a2cbaeb6a404fe87976cc84ba7055feb5b14d2"} Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.849547 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"70ff31cba9dd56eb2b8af86640fc062e012a297d6820348d5f14dba195688194"} Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.851818 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5b4af13c-49f7-4c06-840c-6e976b55fabd","Type":"ContainerStarted","Data":"bb0965dd4b0d6c0d8a2795d7d5ac66432f61305e0643d73fc376449b614177d2"} Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.854233 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"90d7c62feea83e4c216393c180437aec60cdde116ef4613c896fbf55aa635e4d"} Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.893250 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" podStartSLOduration=134.893228524 podStartE2EDuration="2m14.893228524s" podCreationTimestamp="2026-01-29 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:13.890181163 +0000 UTC m=+157.563035400" watchObservedRunningTime="2026-01-29 15:30:13.893228524 +0000 UTC m=+157.566082761" Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.899532 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.899692 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:14.399667432 +0000 UTC m=+158.072521679 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.899770 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:13 crc kubenswrapper[5008]: E0129 15:30:13.900084 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:14.400072332 +0000 UTC m=+158.072926579 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.990864 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:30:13 crc kubenswrapper[5008]: I0129 15:30:13.990939 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.002432 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:14 crc kubenswrapper[5008]: E0129 15:30:14.003706 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:14.503684525 +0000 UTC m=+158.176538772 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.104151 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:14 crc kubenswrapper[5008]: E0129 15:30:14.104580 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:14.604560627 +0000 UTC m=+158.277414934 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.205404 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:14 crc kubenswrapper[5008]: E0129 15:30:14.205608 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:14.705571932 +0000 UTC m=+158.378426169 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.205754 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:14 crc kubenswrapper[5008]: E0129 15:30:14.206070 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:14.706062094 +0000 UTC m=+158.378916331 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.307012 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:14 crc kubenswrapper[5008]: E0129 15:30:14.307212 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:14.807175211 +0000 UTC m=+158.480029448 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.307308 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:14 crc kubenswrapper[5008]: E0129 15:30:14.307639 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:14.807630964 +0000 UTC m=+158.480485201 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.408427 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:14 crc kubenswrapper[5008]: E0129 15:30:14.408623 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:14.908596307 +0000 UTC m=+158.581450544 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.408755 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:14 crc kubenswrapper[5008]: E0129 15:30:14.409133 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:14.90911873 +0000 UTC m=+158.581972967 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.445582 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:14 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:14 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:14 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.445647 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.510479 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:14 crc kubenswrapper[5008]: E0129 15:30:14.510626 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:15.010599578 +0000 UTC m=+158.683453815 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.510764 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:14 crc kubenswrapper[5008]: E0129 15:30:14.511139 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:15.011128301 +0000 UTC m=+158.683982538 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.612056 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:14 crc kubenswrapper[5008]: E0129 15:30:14.612532 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:15.112495745 +0000 UTC m=+158.785349992 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.612584 5008 csr.go:261] certificate signing request csr-5kkjf is approved, waiting to be issued Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.623980 5008 csr.go:257] certificate signing request csr-5kkjf is issued Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.716144 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:14 crc kubenswrapper[5008]: E0129 15:30:14.716536 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:15.216523089 +0000 UTC m=+158.889377326 (durationBeforeRetry 500ms). 
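The csr-5kkjf lines above show the two distinct steps of kubelet certificate handling: an approver first marks the CSR approved, then the signer populates status.certificate, at which point the kubelet logs it as issued. A hedged client-go sketch of waiting for that second step (the kubelet actually watches rather than polls; waitForIssued and the 2s interval are illustrative assumptions):

// csrwait.go: a sketch of polling a CertificateSigningRequest until the
// signer has issued its certificate. Illustrative, not the kubelet's logic.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForIssued polls the named CSR until status.certificate is populated.
func waitForIssued(ctx context.Context, cs kubernetes.Interface, name string) ([]byte, error) {
	for {
		csr, err := cs.CertificatesV1().CertificateSigningRequests().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return nil, err
		}
		if len(csr.Status.Certificate) > 0 {
			return csr.Status.Certificate, nil // approved *and* issued
		}
		select {
		case <-ctx.Done():
			return nil, ctx.Err()
		case <-time.After(2 * time.Second): // poll interval is arbitrary here
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pem, err := waitForIssued(context.Background(), cs, "csr-5kkjf")
	fmt.Println(len(pem), err)
}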
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.734304 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.734370 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.736684 5008 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-n2sqt container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.736737 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" podUID="4adf65cb-4f11-4061-bcb5-71c3d9b890f7" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.817591 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:14 crc kubenswrapper[5008]: E0129 15:30:14.817739 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:15.317715768 +0000 UTC m=+158.990570015 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.818195 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:14 crc kubenswrapper[5008]: E0129 15:30:14.818569 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-29 15:30:15.31855859 +0000 UTC m=+158.991412827 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.878138 5008 generic.go:334] "Generic (PLEG): container finished" podID="5b4af13c-49f7-4c06-840c-6e976b55fabd" containerID="bb0965dd4b0d6c0d8a2795d7d5ac66432f61305e0643d73fc376449b614177d2" exitCode=0 Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.878259 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5b4af13c-49f7-4c06-840c-6e976b55fabd","Type":"ContainerDied","Data":"bb0965dd4b0d6c0d8a2795d7d5ac66432f61305e0643d73fc376449b614177d2"} Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.880218 5008 generic.go:334] "Generic (PLEG): container finished" podID="4a912999-007c-495d-aaa3-857d76158a91" containerID="74e48ee561dff74c0b937607b1d67f636544c839b5dfad578f5c993d847e004b" exitCode=0 Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.880311 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4" event={"ID":"4a912999-007c-495d-aaa3-857d76158a91","Type":"ContainerDied","Data":"74e48ee561dff74c0b937607b1d67f636544c839b5dfad578f5c993d847e004b"} Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.883016 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4l85w" event={"ID":"653b37fe-d452-4111-b27f-ef75530abe41","Type":"ContainerStarted","Data":"c0afd54cc1c889ad21a3bff4c006b825538ef035b544e57d61f4e726cb2a6c30"} Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.883488 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.919185 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:14 crc kubenswrapper[5008]: E0129 15:30:14.919416 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:15.419368539 +0000 UTC m=+159.092222776 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:14 crc kubenswrapper[5008]: I0129 15:30:14.919493 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:14 crc kubenswrapper[5008]: E0129 15:30:14.919829 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:15.419814262 +0000 UTC m=+159.092668499 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.021471 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:15 crc kubenswrapper[5008]: E0129 15:30:15.022649 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:15.522631293 +0000 UTC m=+159.195485530 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.123758 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:15 crc kubenswrapper[5008]: E0129 15:30:15.124416 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:15.624384657 +0000 UTC m=+159.297238994 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.225028 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:15 crc kubenswrapper[5008]: E0129 15:30:15.225309 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:15.725263429 +0000 UTC m=+159.398117666 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.225984 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:15 crc kubenswrapper[5008]: E0129 15:30:15.226591 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:15.726568853 +0000 UTC m=+159.399423090 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.327334 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:15 crc kubenswrapper[5008]: E0129 15:30:15.327668 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:15.827610298 +0000 UTC m=+159.500464535 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.429334 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:15 crc kubenswrapper[5008]: E0129 15:30:15.429966 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:15.929943717 +0000 UTC m=+159.602798144 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.443318 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:15 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:15 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:15 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.443413 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.530690 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:15 crc kubenswrapper[5008]: E0129 15:30:15.530867 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:16.030826348 +0000 UTC m=+159.703680585 (durationBeforeRetry 500ms). 
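The probe failures logged above (dial tcp ... connect: connection refused; HTTP probe failed with statuscode: 500) come from the kubelet dialing each container's probe endpoint and classifying the result. A minimal Go sketch of an HTTP GET probe in that spirit (the kubelet's real prober is prober.go/patch_prober.go; httpProbe and its 1s timeout here are assumptions):

// httpprobe.go: a sketch of an HTTP GET probe in the spirit of the kubelet's
// prober: dial errors and non-2xx statuses both count as failures.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// httpProbe returns nil on 2xx, and otherwise an error carrying the probe
// output, roughly matching the "Probe failed ... output=..." log fields.
func httpProbe(url string) error {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("probe failed: %v", err) // e.g. connect: connection refused
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(io.LimitReader(resp.Body, 10*1024)) // the logged start-of-body
	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		return fmt.Errorf("HTTP probe failed with statuscode: %d, start-of-body: %s",
			resp.StatusCode, body)
	}
	return nil
}

func main() {
	fmt.Println(httpProbe("http://127.0.0.1:8798/health")) // MCD-style liveness URL from the log
}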
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.530983 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:15 crc kubenswrapper[5008]: E0129 15:30:15.531434 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:16.031399954 +0000 UTC m=+159.704254191 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.625585 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-29 15:25:14 +0000 UTC, rotation deadline is 2026-12-23 12:16:42.298277108 +0000 UTC Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.626092 5008 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7868h46m26.672191546s for next certificate rotation Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.632610 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:15 crc kubenswrapper[5008]: E0129 15:30:15.632872 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:16.132835419 +0000 UTC m=+159.805689666 (durationBeforeRetry 500ms). 
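The certificate_manager lines above pick a rotation deadline (2026-12-23) well before the certificate's expiry (2027-01-29) and then sleep until it ("Waiting 7868h46m26s for next certificate rotation"). A Go sketch of that arithmetic; the 70-90% jitter band reflects my reading of client-go's certificate manager and should be treated as an assumption:

// rotation.go: sketch of choosing a jittered rotation deadline inside a
// certificate's validity window, then computing the wait that gets logged.
// The 0.7-0.9 band is an assumption about client-go's jitter, not a fact
// taken from this log.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	// Rotate somewhere in the last 10-30% of the validity window.
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Values taken from the log: a one-year serving certificate.
	notBefore := time.Date(2026, 1, 29, 15, 25, 14, 0, time.UTC)
	notAfter := time.Date(2027, 1, 29, 15, 25, 14, 0, time.UTC)
	deadline := rotationDeadline(notBefore, notAfter)
	fmt.Println("rotation deadline:", deadline) // late Dec 2026, as in the log
	fmt.Println("waiting:", time.Until(deadline).Round(time.Second))
}

The logged deadline of 2026-12-23 sits about 90% of the way through the 2026-01-29 to 2027-01-29 window, which is consistent with that band.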
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.632990 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:15 crc kubenswrapper[5008]: E0129 15:30:15.633374 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:16.133361903 +0000 UTC m=+159.806216330 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.636129 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl"
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.638236 5008 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-468fl container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.638295 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" podUID="00332b75-a73b-49c1-9b72-73445baccf6d" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.638203 5008 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-468fl container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.638441 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" podUID="00332b75-a73b-49c1-9b72-73445baccf6d" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.638899 5008 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-468fl container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.638973 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" podUID="00332b75-a73b-49c1-9b72-73445baccf6d" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.734566 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:15 crc kubenswrapper[5008]: E0129 15:30:15.734773 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:16.234750038 +0000 UTC m=+159.907604275 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.735017 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:15 crc kubenswrapper[5008]: E0129 15:30:15.735388 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:16.235375994 +0000 UTC m=+159.908230231 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.836506 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:15 crc kubenswrapper[5008]: E0129 15:30:15.836733 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:16.336700257 +0000 UTC m=+160.009554504 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.836867 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:15 crc kubenswrapper[5008]: E0129 15:30:15.837236 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:16.33722466 +0000 UTC m=+160.010078987 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.891081 5008 generic.go:334] "Generic (PLEG): container finished" podID="5864482d-142b-4ab3-a5e1-d48e89d3dde0" containerID="e632b499faf44559f02951cba34ddb7f268053890e895e7ed971208eb91b44b2" exitCode=0 Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.891167 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5864482d-142b-4ab3-a5e1-d48e89d3dde0","Type":"ContainerDied","Data":"e632b499faf44559f02951cba34ddb7f268053890e895e7ed971208eb91b44b2"} Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.893001 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" event={"ID":"5ca041e2-baff-40ee-8fc9-e9bc58aee628","Type":"ContainerStarted","Data":"3ef06d541e3be44327ec0ce8f76deb7bf993de18e835a436e7d79c91a5c19e31"} Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.940976 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:15 crc kubenswrapper[5008]: E0129 15:30:15.941154 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:16.441128601 +0000 UTC m=+160.113982838 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.941243 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:15 crc kubenswrapper[5008]: E0129 15:30:15.941545 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:16.441530141 +0000 UTC m=+160.114384378 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:15 crc kubenswrapper[5008]: I0129 15:30:15.976611 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-4l85w" podStartSLOduration=137.97658789 podStartE2EDuration="2m17.97658789s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:15.973250442 +0000 UTC m=+159.646104709" watchObservedRunningTime="2026-01-29 15:30:15.97658789 +0000 UTC m=+159.649442137" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.006341 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-tw5d5" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.045743 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:16 crc kubenswrapper[5008]: E0129 15:30:16.047553 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:16.547523466 +0000 UTC m=+160.220377883 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.147669 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:16 crc kubenswrapper[5008]: E0129 15:30:16.148390 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:16.648370316 +0000 UTC m=+160.321224563 (durationBeforeRetry 500ms). 
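The pod_startup_latency_tracker entry above is plain time arithmetic: observedRunningTime minus podCreationTimestamp gives the 2m17.97s E2E duration, and because both image-pull timestamps are the zero value (nothing was pulled), the SLO duration matches it. Reproducing the logged numbers in Go:

// startup.go: reproducing the pod startup durations from the log entry.
// podStartSLOduration normally excludes image-pull time; the pull
// timestamps are zero here, so SLO and E2E durations coincide.
package main

import (
	"fmt"
	"time"
)

func main() {
	created := time.Date(2026, 1, 29, 15, 27, 58, 0, time.UTC)         // podCreationTimestamp
	running := time.Date(2026, 1, 29, 15, 30, 15, 976587890, time.UTC) // watchObservedRunningTime

	e2e := running.Sub(created)
	pull := time.Duration(0) // firstStartedPulling/lastFinishedPulling are zero in the log
	slo := e2e - pull

	fmt.Println("podStartE2EDuration:", e2e)              // 2m17.97658789s
	fmt.Println("podStartSLOduration:", slo.Seconds(), "s") // 137.97658789 s, as logged
}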
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.248288 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:16 crc kubenswrapper[5008]: E0129 15:30:16.249015 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:16.748995831 +0000 UTC m=+160.421850068 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.301073 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cwgw5"]
Jan 29 15:30:16 crc kubenswrapper[5008]: E0129 15:30:16.301347 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1a4a04b-067c-43f1-b355-46161babe869" containerName="collect-profiles"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.301370 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1a4a04b-067c-43f1-b355-46161babe869" containerName="collect-profiles"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.301487 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1a4a04b-067c-43f1-b355-46161babe869" containerName="collect-profiles"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.303184 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cwgw5"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.306322 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.355817 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:16 crc kubenswrapper[5008]: E0129 15:30:16.361564 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:16.861529577 +0000 UTC m=+160.534383814 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.378751 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.417606 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.434321 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4dwdf"]
Jan 29 15:30:16 crc kubenswrapper[5008]: E0129 15:30:16.434557 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b4af13c-49f7-4c06-840c-6e976b55fabd" containerName="pruner"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.434570 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b4af13c-49f7-4c06-840c-6e976b55fabd" containerName="pruner"
Jan 29 15:30:16 crc kubenswrapper[5008]: E0129 15:30:16.434581 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a912999-007c-495d-aaa3-857d76158a91" containerName="collect-profiles"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.434588 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a912999-007c-495d-aaa3-857d76158a91" containerName="collect-profiles"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.434689 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b4af13c-49f7-4c06-840c-6e976b55fabd" containerName="pruner"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.434707 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a912999-007c-495d-aaa3-857d76158a91" containerName="collect-profiles"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.435436 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4dwdf"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.438201 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cwgw5"]
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.438837 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.447608 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:30:16 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld
Jan 29 15:30:16 crc kubenswrapper[5008]: [+]process-running ok
Jan 29 15:30:16 crc kubenswrapper[5008]: healthz check failed
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.447680 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.456262 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4dwdf"]
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.462542 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.462820 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aebe040-289b-48c1-a825-f12b471a5ad6-utilities\") pod \"certified-operators-cwgw5\" (UID: \"6aebe040-289b-48c1-a825-f12b471a5ad6\") " pod="openshift-marketplace/certified-operators-cwgw5"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.462877 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dldqp\" (UniqueName: \"kubernetes.io/projected/6aebe040-289b-48c1-a825-f12b471a5ad6-kube-api-access-dldqp\") pod \"certified-operators-cwgw5\" (UID: \"6aebe040-289b-48c1-a825-f12b471a5ad6\") " pod="openshift-marketplace/certified-operators-cwgw5"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.462971 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aebe040-289b-48c1-a825-f12b471a5ad6-catalog-content\") pod \"certified-operators-cwgw5\" (UID: \"6aebe040-289b-48c1-a825-f12b471a5ad6\") " pod="openshift-marketplace/certified-operators-cwgw5"
Jan 29 15:30:16 crc kubenswrapper[5008]: E0129 15:30:16.463151 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:16.963121427 +0000 UTC m=+160.635975664 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.564135 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b4af13c-49f7-4c06-840c-6e976b55fabd-kubelet-dir\") pod \"5b4af13c-49f7-4c06-840c-6e976b55fabd\" (UID: \"5b4af13c-49f7-4c06-840c-6e976b55fabd\") "
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.564196 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b4af13c-49f7-4c06-840c-6e976b55fabd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5b4af13c-49f7-4c06-840c-6e976b55fabd" (UID: "5b4af13c-49f7-4c06-840c-6e976b55fabd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.564222 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a912999-007c-495d-aaa3-857d76158a91-secret-volume\") pod \"4a912999-007c-495d-aaa3-857d76158a91\" (UID: \"4a912999-007c-495d-aaa3-857d76158a91\") "
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.564249 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a912999-007c-495d-aaa3-857d76158a91-config-volume\") pod \"4a912999-007c-495d-aaa3-857d76158a91\" (UID: \"4a912999-007c-495d-aaa3-857d76158a91\") "
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.564305 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhkzq\" (UniqueName: \"kubernetes.io/projected/4a912999-007c-495d-aaa3-857d76158a91-kube-api-access-nhkzq\") pod \"4a912999-007c-495d-aaa3-857d76158a91\" (UID: \"4a912999-007c-495d-aaa3-857d76158a91\") "
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.564341 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b4af13c-49f7-4c06-840c-6e976b55fabd-kube-api-access\") pod \"5b4af13c-49f7-4c06-840c-6e976b55fabd\" (UID: \"5b4af13c-49f7-4c06-840c-6e976b55fabd\") "
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.564475 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.564502 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8q2q\" (UniqueName: \"kubernetes.io/projected/d2d42845-cca1-4b60-bc84-4b2baebf702b-kube-api-access-s8q2q\") pod \"community-operators-4dwdf\" (UID: \"d2d42845-cca1-4b60-bc84-4b2baebf702b\") " pod="openshift-marketplace/community-operators-4dwdf"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.564536 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aebe040-289b-48c1-a825-f12b471a5ad6-utilities\") pod \"certified-operators-cwgw5\" (UID: \"6aebe040-289b-48c1-a825-f12b471a5ad6\") " pod="openshift-marketplace/certified-operators-cwgw5"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.564559 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dldqp\" (UniqueName: \"kubernetes.io/projected/6aebe040-289b-48c1-a825-f12b471a5ad6-kube-api-access-dldqp\") pod \"certified-operators-cwgw5\" (UID: \"6aebe040-289b-48c1-a825-f12b471a5ad6\") " pod="openshift-marketplace/certified-operators-cwgw5"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.564590 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2d42845-cca1-4b60-bc84-4b2baebf702b-catalog-content\") pod \"community-operators-4dwdf\" (UID: \"d2d42845-cca1-4b60-bc84-4b2baebf702b\") " pod="openshift-marketplace/community-operators-4dwdf"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.564629 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2d42845-cca1-4b60-bc84-4b2baebf702b-utilities\") pod \"community-operators-4dwdf\" (UID: \"d2d42845-cca1-4b60-bc84-4b2baebf702b\") " pod="openshift-marketplace/community-operators-4dwdf"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.564662 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aebe040-289b-48c1-a825-f12b471a5ad6-catalog-content\") pod \"certified-operators-cwgw5\" (UID: \"6aebe040-289b-48c1-a825-f12b471a5ad6\") " pod="openshift-marketplace/certified-operators-cwgw5"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.564705 5008 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b4af13c-49f7-4c06-840c-6e976b55fabd-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.565062 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aebe040-289b-48c1-a825-f12b471a5ad6-catalog-content\") pod \"certified-operators-cwgw5\" (UID: \"6aebe040-289b-48c1-a825-f12b471a5ad6\") " pod="openshift-marketplace/certified-operators-cwgw5"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.565247 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a912999-007c-495d-aaa3-857d76158a91-config-volume" (OuterVolumeSpecName: "config-volume") pod "4a912999-007c-495d-aaa3-857d76158a91" (UID: "4a912999-007c-495d-aaa3-857d76158a91"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.565542 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aebe040-289b-48c1-a825-f12b471a5ad6-utilities\") pod \"certified-operators-cwgw5\" (UID: \"6aebe040-289b-48c1-a825-f12b471a5ad6\") " pod="openshift-marketplace/certified-operators-cwgw5"
Jan 29 15:30:16 crc kubenswrapper[5008]: E0129 15:30:16.565899 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:17.065885167 +0000 UTC m=+160.738739404 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.573396 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b4af13c-49f7-4c06-840c-6e976b55fabd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5b4af13c-49f7-4c06-840c-6e976b55fabd" (UID: "5b4af13c-49f7-4c06-840c-6e976b55fabd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.579050 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a912999-007c-495d-aaa3-857d76158a91-kube-api-access-nhkzq" (OuterVolumeSpecName: "kube-api-access-nhkzq") pod "4a912999-007c-495d-aaa3-857d76158a91" (UID: "4a912999-007c-495d-aaa3-857d76158a91"). InnerVolumeSpecName "kube-api-access-nhkzq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.593128 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a912999-007c-495d-aaa3-857d76158a91-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4a912999-007c-495d-aaa3-857d76158a91" (UID: "4a912999-007c-495d-aaa3-857d76158a91"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.603194 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dldqp\" (UniqueName: \"kubernetes.io/projected/6aebe040-289b-48c1-a825-f12b471a5ad6-kube-api-access-dldqp\") pod \"certified-operators-cwgw5\" (UID: \"6aebe040-289b-48c1-a825-f12b471a5ad6\") " pod="openshift-marketplace/certified-operators-cwgw5"
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.627187 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z9t2h"]
Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.629103 5008 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.660030 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z9t2h"] Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.667546 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:16 crc kubenswrapper[5008]: E0129 15:30:16.667762 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:17.167720544 +0000 UTC m=+160.840574791 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.667970 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.668002 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8q2q\" (UniqueName: \"kubernetes.io/projected/d2d42845-cca1-4b60-bc84-4b2baebf702b-kube-api-access-s8q2q\") pod \"community-operators-4dwdf\" (UID: \"d2d42845-cca1-4b60-bc84-4b2baebf702b\") " pod="openshift-marketplace/community-operators-4dwdf" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.668067 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2d42845-cca1-4b60-bc84-4b2baebf702b-catalog-content\") pod \"community-operators-4dwdf\" (UID: \"d2d42845-cca1-4b60-bc84-4b2baebf702b\") " pod="openshift-marketplace/community-operators-4dwdf" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.668104 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2d42845-cca1-4b60-bc84-4b2baebf702b-utilities\") pod \"community-operators-4dwdf\" (UID: \"d2d42845-cca1-4b60-bc84-4b2baebf702b\") " pod="openshift-marketplace/community-operators-4dwdf" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.668144 5008 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a912999-007c-495d-aaa3-857d76158a91-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.668156 5008 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/4a912999-007c-495d-aaa3-857d76158a91-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.668166 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhkzq\" (UniqueName: \"kubernetes.io/projected/4a912999-007c-495d-aaa3-857d76158a91-kube-api-access-nhkzq\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.668175 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b4af13c-49f7-4c06-840c-6e976b55fabd-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:16 crc kubenswrapper[5008]: E0129 15:30:16.668557 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:17.168543756 +0000 UTC m=+160.841397993 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.668766 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2d42845-cca1-4b60-bc84-4b2baebf702b-utilities\") pod \"community-operators-4dwdf\" (UID: \"d2d42845-cca1-4b60-bc84-4b2baebf702b\") " pod="openshift-marketplace/community-operators-4dwdf" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.668970 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2d42845-cca1-4b60-bc84-4b2baebf702b-catalog-content\") pod \"community-operators-4dwdf\" (UID: \"d2d42845-cca1-4b60-bc84-4b2baebf702b\") " pod="openshift-marketplace/community-operators-4dwdf" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.718800 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cwgw5" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.720864 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8q2q\" (UniqueName: \"kubernetes.io/projected/d2d42845-cca1-4b60-bc84-4b2baebf702b-kube-api-access-s8q2q\") pod \"community-operators-4dwdf\" (UID: \"d2d42845-cca1-4b60-bc84-4b2baebf702b\") " pod="openshift-marketplace/community-operators-4dwdf" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.769608 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:16 crc kubenswrapper[5008]: E0129 15:30:16.769946 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 15:30:17.269917769 +0000 UTC m=+160.942772006 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.770202 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-utilities\") pod \"certified-operators-z9t2h\" (UID: \"250e7db8-88dd-44fd-8d73-51a6f8f4ba96\") " pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.770246 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.770297 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5sl4\" (UniqueName: \"kubernetes.io/projected/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-kube-api-access-z5sl4\") pod \"certified-operators-z9t2h\" (UID: \"250e7db8-88dd-44fd-8d73-51a6f8f4ba96\") " pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.770339 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-catalog-content\") pod \"certified-operators-z9t2h\" (UID: \"250e7db8-88dd-44fd-8d73-51a6f8f4ba96\") " pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.770461 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4dwdf" Jan 29 15:30:16 crc kubenswrapper[5008]: E0129 15:30:16.770998 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:17.270972367 +0000 UTC m=+160.943826604 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.828055 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h7vmc"] Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.829216 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h7vmc" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.858743 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h7vmc"] Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.888952 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.889304 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bcecb83-1aec-4bd4-9b46-f02deb628018-utilities\") pod \"community-operators-h7vmc\" (UID: \"9bcecb83-1aec-4bd4-9b46-f02deb628018\") " pod="openshift-marketplace/community-operators-h7vmc" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.889378 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bcecb83-1aec-4bd4-9b46-f02deb628018-catalog-content\") pod \"community-operators-h7vmc\" (UID: \"9bcecb83-1aec-4bd4-9b46-f02deb628018\") " pod="openshift-marketplace/community-operators-h7vmc" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.889434 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5sl4\" (UniqueName: \"kubernetes.io/projected/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-kube-api-access-z5sl4\") pod \"certified-operators-z9t2h\" (UID: \"250e7db8-88dd-44fd-8d73-51a6f8f4ba96\") " pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.889476 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-catalog-content\") pod \"certified-operators-z9t2h\" (UID: \"250e7db8-88dd-44fd-8d73-51a6f8f4ba96\") " pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.889541 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btkm4\" (UniqueName: \"kubernetes.io/projected/9bcecb83-1aec-4bd4-9b46-f02deb628018-kube-api-access-btkm4\") pod \"community-operators-h7vmc\" (UID: \"9bcecb83-1aec-4bd4-9b46-f02deb628018\") " pod="openshift-marketplace/community-operators-h7vmc" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.889673 5008 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-utilities\") pod \"certified-operators-z9t2h\" (UID: \"250e7db8-88dd-44fd-8d73-51a6f8f4ba96\") " pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.890264 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-utilities\") pod \"certified-operators-z9t2h\" (UID: \"250e7db8-88dd-44fd-8d73-51a6f8f4ba96\") " pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:30:16 crc kubenswrapper[5008]: E0129 15:30:16.890361 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:17.390341242 +0000 UTC m=+161.063195479 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.891302 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-catalog-content\") pod \"certified-operators-z9t2h\" (UID: \"250e7db8-88dd-44fd-8d73-51a6f8f4ba96\") " pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.949662 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5sl4\" (UniqueName: \"kubernetes.io/projected/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-kube-api-access-z5sl4\") pod \"certified-operators-z9t2h\" (UID: \"250e7db8-88dd-44fd-8d73-51a6f8f4ba96\") " pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.955205 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.957660 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.958945 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5b4af13c-49f7-4c06-840c-6e976b55fabd","Type":"ContainerDied","Data":"a017c760d370789ae6b77ac576c3c8c398bd726ece1c0385f34120f6300e19d6"} Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.958992 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a017c760d370789ae6b77ac576c3c8c398bd726ece1c0385f34120f6300e19d6" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.979281 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4" Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.979791 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4" event={"ID":"4a912999-007c-495d-aaa3-857d76158a91","Type":"ContainerDied","Data":"e472830b4505664315811f646f65ea00f2b653c72238508aa40d729f5d7fedcb"} Jan 29 15:30:16 crc kubenswrapper[5008]: I0129 15:30:16.979835 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e472830b4505664315811f646f65ea00f2b653c72238508aa40d729f5d7fedcb" Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.001431 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bcecb83-1aec-4bd4-9b46-f02deb628018-catalog-content\") pod \"community-operators-h7vmc\" (UID: \"9bcecb83-1aec-4bd4-9b46-f02deb628018\") " pod="openshift-marketplace/community-operators-h7vmc" Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.001551 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btkm4\" (UniqueName: \"kubernetes.io/projected/9bcecb83-1aec-4bd4-9b46-f02deb628018-kube-api-access-btkm4\") pod \"community-operators-h7vmc\" (UID: \"9bcecb83-1aec-4bd4-9b46-f02deb628018\") " pod="openshift-marketplace/community-operators-h7vmc" Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.001701 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.001765 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bcecb83-1aec-4bd4-9b46-f02deb628018-utilities\") pod \"community-operators-h7vmc\" (UID: \"9bcecb83-1aec-4bd4-9b46-f02deb628018\") " pod="openshift-marketplace/community-operators-h7vmc" Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.002353 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bcecb83-1aec-4bd4-9b46-f02deb628018-utilities\") pod \"community-operators-h7vmc\" (UID: \"9bcecb83-1aec-4bd4-9b46-f02deb628018\") " pod="openshift-marketplace/community-operators-h7vmc" Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.003035 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bcecb83-1aec-4bd4-9b46-f02deb628018-catalog-content\") pod \"community-operators-h7vmc\" (UID: \"9bcecb83-1aec-4bd4-9b46-f02deb628018\") " pod="openshift-marketplace/community-operators-h7vmc" Jan 29 15:30:17 crc kubenswrapper[5008]: E0129 15:30:17.003189 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:17.503166397 +0000 UTC m=+161.176020634 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.042987 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btkm4\" (UniqueName: \"kubernetes.io/projected/9bcecb83-1aec-4bd4-9b46-f02deb628018-kube-api-access-btkm4\") pod \"community-operators-h7vmc\" (UID: \"9bcecb83-1aec-4bd4-9b46-f02deb628018\") " pod="openshift-marketplace/community-operators-h7vmc" Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.107117 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:17 crc kubenswrapper[5008]: E0129 15:30:17.107533 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:17.607512738 +0000 UTC m=+161.280366975 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.149548 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h7vmc" Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.198036 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cwgw5"] Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.212560 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:17 crc kubenswrapper[5008]: E0129 15:30:17.212908 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:17.712896688 +0000 UTC m=+161.385750925 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.314444 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:17 crc kubenswrapper[5008]: E0129 15:30:17.314650 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:17.81461142 +0000 UTC m=+161.487465657 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.314800 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:17 crc kubenswrapper[5008]: E0129 15:30:17.315124 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:17.815112504 +0000 UTC m=+161.487966741 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.339193 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4dwdf"] Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.415888 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:17 crc kubenswrapper[5008]: E0129 15:30:17.416191 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:17.91617544 +0000 UTC m=+161.589029677 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.441729 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:17 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:17 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:17 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.441775 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.519455 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:17 crc kubenswrapper[5008]: E0129 15:30:17.519962 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:18.019945117 +0000 UTC m=+161.692799354 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.573950 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.578718 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h7vmc"] Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.622318 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.622450 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5864482d-142b-4ab3-a5e1-d48e89d3dde0-kubelet-dir\") pod \"5864482d-142b-4ab3-a5e1-d48e89d3dde0\" (UID: \"5864482d-142b-4ab3-a5e1-d48e89d3dde0\") " Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.622477 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5864482d-142b-4ab3-a5e1-d48e89d3dde0-kube-api-access\") pod \"5864482d-142b-4ab3-a5e1-d48e89d3dde0\" (UID: \"5864482d-142b-4ab3-a5e1-d48e89d3dde0\") " Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.623394 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5864482d-142b-4ab3-a5e1-d48e89d3dde0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5864482d-142b-4ab3-a5e1-d48e89d3dde0" (UID: "5864482d-142b-4ab3-a5e1-d48e89d3dde0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:30:17 crc kubenswrapper[5008]: E0129 15:30:17.623447 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:18.123407445 +0000 UTC m=+161.796261682 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.653909 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z9t2h"] Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.723621 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:17 crc kubenswrapper[5008]: E0129 15:30:17.724021 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:18.224002059 +0000 UTC m=+161.896856296 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.724245 5008 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5864482d-142b-4ab3-a5e1-d48e89d3dde0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.825318 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:17 crc kubenswrapper[5008]: E0129 15:30:17.825513 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:18.325483516 +0000 UTC m=+161.998337753 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.825604 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:17 crc kubenswrapper[5008]: E0129 15:30:17.825899 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:18.325887616 +0000 UTC m=+161.998741853 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.926232 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:17 crc kubenswrapper[5008]: E0129 15:30:17.926520 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:18.426505251 +0000 UTC m=+162.099359488 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.985322 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4dwdf" event={"ID":"d2d42845-cca1-4b60-bc84-4b2baebf702b","Type":"ContainerStarted","Data":"dd8d6696ceba57808730ee9b74baad13f0f3efae19998fb92ff0c2c357522c56"} Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.986307 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwgw5" event={"ID":"6aebe040-289b-48c1-a825-f12b471a5ad6","Type":"ContainerStarted","Data":"54d6cf905ba0c9c55baea0b1bbde4338656f4661c2571ae702fdc0067f3ef4cb"} Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.987479 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5864482d-142b-4ab3-a5e1-d48e89d3dde0","Type":"ContainerDied","Data":"aa1577fad78ae8be2b88ef68cf00c8928dcb8476da5d533f937dc579b89d41cc"} Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.987505 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa1577fad78ae8be2b88ef68cf00c8928dcb8476da5d533f937dc579b89d41cc" Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.987554 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:30:17 crc kubenswrapper[5008]: I0129 15:30:17.988680 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7vmc" event={"ID":"9bcecb83-1aec-4bd4-9b46-f02deb628018","Type":"ContainerStarted","Data":"af3e1a3fc6fe6b714e3700dd86c4612e0716f599f6f3f8cae393165561ce5bfe"} Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.027055 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.027354 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:18.527343641 +0000 UTC m=+162.200197878 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.128589 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.128732 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:18.628712145 +0000 UTC m=+162.301566382 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.129232 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.129567 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:18.629558077 +0000 UTC m=+162.302412314 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.168352 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5864482d-142b-4ab3-a5e1-d48e89d3dde0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5864482d-142b-4ab3-a5e1-d48e89d3dde0" (UID: "5864482d-142b-4ab3-a5e1-d48e89d3dde0"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.234872 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:18.734848093 +0000 UTC m=+162.407702340 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.234922 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.235154 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.235617 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:18.735604283 +0000 UTC m=+162.408458520 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.235969 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5864482d-142b-4ab3-a5e1-d48e89d3dde0-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.336617 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.337057 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 15:30:18.837017068 +0000 UTC m=+162.509871485 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.337812 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.338302 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:18.838278451 +0000 UTC m=+162.511132688 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.426609 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mkxw5"] Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.426933 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5864482d-142b-4ab3-a5e1-d48e89d3dde0" containerName="pruner" Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.426948 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="5864482d-142b-4ab3-a5e1-d48e89d3dde0" containerName="pruner" Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.427062 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="5864482d-142b-4ab3-a5e1-d48e89d3dde0" containerName="pruner" Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.430603 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkxw5" Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.433435 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.433650 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkxw5"] Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.438481 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.438618 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:18.938597658 +0000 UTC m=+162.611451905 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.438736 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.439041 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:18.939033429 +0000 UTC m=+162.611887666 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.440421 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:18 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:18 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:18 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.440465 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.539986 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.540193 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:19.040155076 +0000 UTC m=+162.713009313 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.540250 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aef1830-577d-405c-bb54-6f9fe217ae86-utilities\") pod \"redhat-marketplace-mkxw5\" (UID: \"6aef1830-577d-405c-bb54-6f9fe217ae86\") " pod="openshift-marketplace/redhat-marketplace-mkxw5" Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.540305 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.540442 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftbd9\" (UniqueName: \"kubernetes.io/projected/6aef1830-577d-405c-bb54-6f9fe217ae86-kube-api-access-ftbd9\") pod \"redhat-marketplace-mkxw5\" (UID: \"6aef1830-577d-405c-bb54-6f9fe217ae86\") " pod="openshift-marketplace/redhat-marketplace-mkxw5" Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.540556 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aef1830-577d-405c-bb54-6f9fe217ae86-catalog-content\") pod \"redhat-marketplace-mkxw5\" (UID: \"6aef1830-577d-405c-bb54-6f9fe217ae86\") " pod="openshift-marketplace/redhat-marketplace-mkxw5" Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.540885 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:19.040826204 +0000 UTC m=+162.713680441 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.641878 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.642079 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:19.142047924 +0000 UTC m=+162.814902171 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.642113 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aef1830-577d-405c-bb54-6f9fe217ae86-catalog-content\") pod \"redhat-marketplace-mkxw5\" (UID: \"6aef1830-577d-405c-bb54-6f9fe217ae86\") " pod="openshift-marketplace/redhat-marketplace-mkxw5" Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.642177 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aef1830-577d-405c-bb54-6f9fe217ae86-utilities\") pod \"redhat-marketplace-mkxw5\" (UID: \"6aef1830-577d-405c-bb54-6f9fe217ae86\") " pod="openshift-marketplace/redhat-marketplace-mkxw5" Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.642231 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.642327 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftbd9\" (UniqueName: \"kubernetes.io/projected/6aef1830-577d-405c-bb54-6f9fe217ae86-kube-api-access-ftbd9\") pod \"redhat-marketplace-mkxw5\" (UID: \"6aef1830-577d-405c-bb54-6f9fe217ae86\") " pod="openshift-marketplace/redhat-marketplace-mkxw5" Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.642623 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6aef1830-577d-405c-bb54-6f9fe217ae86-utilities\") pod \"redhat-marketplace-mkxw5\" (UID: \"6aef1830-577d-405c-bb54-6f9fe217ae86\") " pod="openshift-marketplace/redhat-marketplace-mkxw5" Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.642713 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aef1830-577d-405c-bb54-6f9fe217ae86-catalog-content\") pod \"redhat-marketplace-mkxw5\" (UID: \"6aef1830-577d-405c-bb54-6f9fe217ae86\") " pod="openshift-marketplace/redhat-marketplace-mkxw5" Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.642882 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:19.142850576 +0000 UTC m=+162.815704813 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.650523 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-468fl" Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.673245 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftbd9\" (UniqueName: \"kubernetes.io/projected/6aef1830-577d-405c-bb54-6f9fe217ae86-kube-api-access-ftbd9\") pod \"redhat-marketplace-mkxw5\" (UID: \"6aef1830-577d-405c-bb54-6f9fe217ae86\") " pod="openshift-marketplace/redhat-marketplace-mkxw5" Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.743298 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.743489 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:19.24345663 +0000 UTC m=+162.916310867 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.743550 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.743949 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:19.243937312 +0000 UTC m=+162.916791549 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.817627 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fd6nq"] Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.818753 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fd6nq" Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.820455 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkxw5" Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.829333 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fd6nq"] Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.859734 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.859925 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:19.359898398 +0000 UTC m=+163.032752675 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.860182 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37742fc9-fce4-41f0-ba04-7232b6e647a7-catalog-content\") pod \"redhat-marketplace-fd6nq\" (UID: \"37742fc9-fce4-41f0-ba04-7232b6e647a7\") " pod="openshift-marketplace/redhat-marketplace-fd6nq" Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.860265 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw6k4\" (UniqueName: \"kubernetes.io/projected/37742fc9-fce4-41f0-ba04-7232b6e647a7-kube-api-access-lw6k4\") pod \"redhat-marketplace-fd6nq\" (UID: \"37742fc9-fce4-41f0-ba04-7232b6e647a7\") " pod="openshift-marketplace/redhat-marketplace-fd6nq" Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.860411 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.860529 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37742fc9-fce4-41f0-ba04-7232b6e647a7-utilities\") pod \"redhat-marketplace-fd6nq\" (UID: \"37742fc9-fce4-41f0-ba04-7232b6e647a7\") " pod="openshift-marketplace/redhat-marketplace-fd6nq" Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.861170 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:19.361148521 +0000 UTC m=+163.034002798 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.961849 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.962029 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:19.461997001 +0000 UTC m=+163.134851238 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.962095 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:18 crc kubenswrapper[5008]: E0129 15:30:18.962441 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:19.462428312 +0000 UTC m=+163.135282539 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.963396 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37742fc9-fce4-41f0-ba04-7232b6e647a7-utilities\") pod \"redhat-marketplace-fd6nq\" (UID: \"37742fc9-fce4-41f0-ba04-7232b6e647a7\") " pod="openshift-marketplace/redhat-marketplace-fd6nq" Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.963469 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37742fc9-fce4-41f0-ba04-7232b6e647a7-catalog-content\") pod \"redhat-marketplace-fd6nq\" (UID: \"37742fc9-fce4-41f0-ba04-7232b6e647a7\") " pod="openshift-marketplace/redhat-marketplace-fd6nq" Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.963510 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw6k4\" (UniqueName: \"kubernetes.io/projected/37742fc9-fce4-41f0-ba04-7232b6e647a7-kube-api-access-lw6k4\") pod \"redhat-marketplace-fd6nq\" (UID: \"37742fc9-fce4-41f0-ba04-7232b6e647a7\") " pod="openshift-marketplace/redhat-marketplace-fd6nq" Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.964527 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37742fc9-fce4-41f0-ba04-7232b6e647a7-catalog-content\") pod \"redhat-marketplace-fd6nq\" (UID: \"37742fc9-fce4-41f0-ba04-7232b6e647a7\") " pod="openshift-marketplace/redhat-marketplace-fd6nq" Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.964752 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37742fc9-fce4-41f0-ba04-7232b6e647a7-utilities\") pod \"redhat-marketplace-fd6nq\" (UID: \"37742fc9-fce4-41f0-ba04-7232b6e647a7\") " pod="openshift-marketplace/redhat-marketplace-fd6nq" Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.991927 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw6k4\" (UniqueName: \"kubernetes.io/projected/37742fc9-fce4-41f0-ba04-7232b6e647a7-kube-api-access-lw6k4\") pod \"redhat-marketplace-fd6nq\" (UID: \"37742fc9-fce4-41f0-ba04-7232b6e647a7\") " pod="openshift-marketplace/redhat-marketplace-fd6nq" Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.998168 5008 generic.go:334] "Generic (PLEG): container finished" podID="9bcecb83-1aec-4bd4-9b46-f02deb628018" containerID="2c5bd79fe1383fd09ebd0db5b0a83990cb1f07f4f895a71dc2c671033d14863f" exitCode=0 Jan 29 15:30:18 crc kubenswrapper[5008]: I0129 15:30:18.998236 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7vmc" event={"ID":"9bcecb83-1aec-4bd4-9b46-f02deb628018","Type":"ContainerDied","Data":"2c5bd79fe1383fd09ebd0db5b0a83990cb1f07f4f895a71dc2c671033d14863f"} Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.004670 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" 
event={"ID":"5ca041e2-baff-40ee-8fc9-e9bc58aee628","Type":"ContainerStarted","Data":"0b67f4499cb9c5f59f98a3ab23560adef52655f324dbef45543827963ba1b7c8"} Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.007365 5008 generic.go:334] "Generic (PLEG): container finished" podID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" containerID="e071e2b226079246f9ca57f9959626bc9e073f0d12b52ede6ad72f288413a3f9" exitCode=0 Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.007446 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z9t2h" event={"ID":"250e7db8-88dd-44fd-8d73-51a6f8f4ba96","Type":"ContainerDied","Data":"e071e2b226079246f9ca57f9959626bc9e073f0d12b52ede6ad72f288413a3f9"} Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.007496 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z9t2h" event={"ID":"250e7db8-88dd-44fd-8d73-51a6f8f4ba96","Type":"ContainerStarted","Data":"616df5323044bc3ebd3a98d75f3ea061e944f69d5bc62803ba635bd69dee1996"} Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.011583 5008 generic.go:334] "Generic (PLEG): container finished" podID="d2d42845-cca1-4b60-bc84-4b2baebf702b" containerID="62b0c01ef29dcd7c7957aa7b9fba8ee02c41e66ab0221b57ac7769babd464e8c" exitCode=0 Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.011635 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4dwdf" event={"ID":"d2d42845-cca1-4b60-bc84-4b2baebf702b","Type":"ContainerDied","Data":"62b0c01ef29dcd7c7957aa7b9fba8ee02c41e66ab0221b57ac7769babd464e8c"} Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.014357 5008 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.018643 5008 generic.go:334] "Generic (PLEG): container finished" podID="6aebe040-289b-48c1-a825-f12b471a5ad6" containerID="f52329f3f265a1114741db2a28bb35b1a3c05c140e0374037d9b0d6bd838822b" exitCode=0 Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.018698 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwgw5" event={"ID":"6aebe040-289b-48c1-a825-f12b471a5ad6","Type":"ContainerDied","Data":"f52329f3f265a1114741db2a28bb35b1a3c05c140e0374037d9b0d6bd838822b"} Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.064995 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.066529 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:19.566483736 +0000 UTC m=+163.239337973 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.070283 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkxw5"] Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.129915 5008 patch_prober.go:28] interesting pod/downloads-7954f5f757-6wmrp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.129980 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-6wmrp" podUID="64cf2ff9-40f4-48a5-a16c-6513cf0470bd" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.130370 5008 patch_prober.go:28] interesting pod/downloads-7954f5f757-6wmrp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.130435 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6wmrp" podUID="64cf2ff9-40f4-48a5-a16c-6513cf0470bd" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.137753 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fd6nq" Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.166461 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.166840 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:19.666827384 +0000 UTC m=+163.339681611 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.239718 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.239985 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s8q2q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-4dwdf_openshift-marketplace(d2d42845-cca1-4b60-bc84-4b2baebf702b): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.241133 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-4dwdf" podUID="d2d42845-cca1-4b60-bc84-4b2baebf702b" Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.250461 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid 
status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.250591 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dldqp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-cwgw5_openshift-marketplace(6aebe040-289b-48c1-a825-f12b471a5ad6): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.251962 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-cwgw5" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.268196 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.268700 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:19.768682251 +0000 UTC m=+163.441536488 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.362576 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fd6nq"] Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.369515 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.369928 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:19.869911861 +0000 UTC m=+163.542766098 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:19 crc kubenswrapper[5008]: W0129 15:30:19.372089 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37742fc9_fce4_41f0_ba04_7232b6e647a7.slice/crio-335be0a36e05771a7a88d81fee1b61fe29f073571f151738b87168e8e0776f1d WatchSource:0}: Error finding container 335be0a36e05771a7a88d81fee1b61fe29f073571f151738b87168e8e0776f1d: Status 404 returned error can't find the container with id 335be0a36e05771a7a88d81fee1b61fe29f073571f151738b87168e8e0776f1d Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.416656 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tst9c"] Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.417819 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tst9c" Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.419623 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.431254 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tst9c"] Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.441418 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:19 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:19 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:19 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.441482 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.471273 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.471510 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea8deba9-72cb-4274-add1-e80591a9e7cc-utilities\") pod \"redhat-operators-tst9c\" (UID: \"ea8deba9-72cb-4274-add1-e80591a9e7cc\") " pod="openshift-marketplace/redhat-operators-tst9c" Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.471546 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-229kp\" (UniqueName: \"kubernetes.io/projected/ea8deba9-72cb-4274-add1-e80591a9e7cc-kube-api-access-229kp\") pod \"redhat-operators-tst9c\" (UID: \"ea8deba9-72cb-4274-add1-e80591a9e7cc\") " pod="openshift-marketplace/redhat-operators-tst9c" Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.471613 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea8deba9-72cb-4274-add1-e80591a9e7cc-catalog-content\") pod \"redhat-operators-tst9c\" (UID: \"ea8deba9-72cb-4274-add1-e80591a9e7cc\") " pod="openshift-marketplace/redhat-operators-tst9c" Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.471752 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:19.971734797 +0000 UTC m=+163.644589034 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.572853 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea8deba9-72cb-4274-add1-e80591a9e7cc-catalog-content\") pod \"redhat-operators-tst9c\" (UID: \"ea8deba9-72cb-4274-add1-e80591a9e7cc\") " pod="openshift-marketplace/redhat-operators-tst9c" Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.572957 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea8deba9-72cb-4274-add1-e80591a9e7cc-utilities\") pod \"redhat-operators-tst9c\" (UID: \"ea8deba9-72cb-4274-add1-e80591a9e7cc\") " pod="openshift-marketplace/redhat-operators-tst9c" Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.572999 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-229kp\" (UniqueName: \"kubernetes.io/projected/ea8deba9-72cb-4274-add1-e80591a9e7cc-kube-api-access-229kp\") pod \"redhat-operators-tst9c\" (UID: \"ea8deba9-72cb-4274-add1-e80591a9e7cc\") " pod="openshift-marketplace/redhat-operators-tst9c" Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.573031 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.573667 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:20.073653676 +0000 UTC m=+163.746507913 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.573905 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea8deba9-72cb-4274-add1-e80591a9e7cc-utilities\") pod \"redhat-operators-tst9c\" (UID: \"ea8deba9-72cb-4274-add1-e80591a9e7cc\") " pod="openshift-marketplace/redhat-operators-tst9c" Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.573989 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea8deba9-72cb-4274-add1-e80591a9e7cc-catalog-content\") pod \"redhat-operators-tst9c\" (UID: \"ea8deba9-72cb-4274-add1-e80591a9e7cc\") " pod="openshift-marketplace/redhat-operators-tst9c" Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.598885 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-229kp\" (UniqueName: \"kubernetes.io/projected/ea8deba9-72cb-4274-add1-e80591a9e7cc-kube-api-access-229kp\") pod \"redhat-operators-tst9c\" (UID: \"ea8deba9-72cb-4274-add1-e80591a9e7cc\") " pod="openshift-marketplace/redhat-operators-tst9c" Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.659398 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.674773 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.674918 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:20.174887146 +0000 UTC m=+163.847741383 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.675065 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.675346 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:20.175332427 +0000 UTC m=+163.848186744 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.741159 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.746516 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n2sqt" Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.769945 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.769986 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-4l85w" Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.775939 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.776154 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:20.276114566 +0000 UTC m=+163.948968803 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.776388 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.777502 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:20.277487042 +0000 UTC m=+163.950341279 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.798908 5008 patch_prober.go:28] interesting pod/console-f9d7485db-g2rk6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.798985 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-g2rk6" podUID="3f7de4a5-3819-41c0-9e2e-766dcff408bb" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.803113 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tst9c" Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.826945 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lhtht"] Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.831959 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lhtht" Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.834568 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.849883 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lhtht"] Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.877281 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.877967 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pfbb\" (UniqueName: \"kubernetes.io/projected/a954daed-802a-4b46-81ef-7079dcddbaa5-kube-api-access-6pfbb\") pod \"redhat-operators-lhtht\" (UID: \"a954daed-802a-4b46-81ef-7079dcddbaa5\") " pod="openshift-marketplace/redhat-operators-lhtht" Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.878009 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:20.377982713 +0000 UTC m=+164.050836950 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.878067 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.878167 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a954daed-802a-4b46-81ef-7079dcddbaa5-catalog-content\") pod \"redhat-operators-lhtht\" (UID: \"a954daed-802a-4b46-81ef-7079dcddbaa5\") " pod="openshift-marketplace/redhat-operators-lhtht" Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.878203 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a954daed-802a-4b46-81ef-7079dcddbaa5-utilities\") pod \"redhat-operators-lhtht\" (UID: \"a954daed-802a-4b46-81ef-7079dcddbaa5\") " pod="openshift-marketplace/redhat-operators-lhtht" Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.880521 5008 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:20.380503949 +0000 UTC m=+164.053358246 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.940149 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.981298 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.981571 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pfbb\" (UniqueName: \"kubernetes.io/projected/a954daed-802a-4b46-81ef-7079dcddbaa5-kube-api-access-6pfbb\") pod \"redhat-operators-lhtht\" (UID: \"a954daed-802a-4b46-81ef-7079dcddbaa5\") " pod="openshift-marketplace/redhat-operators-lhtht" Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.981740 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a954daed-802a-4b46-81ef-7079dcddbaa5-catalog-content\") pod \"redhat-operators-lhtht\" (UID: \"a954daed-802a-4b46-81ef-7079dcddbaa5\") " pod="openshift-marketplace/redhat-operators-lhtht" Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.981793 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a954daed-802a-4b46-81ef-7079dcddbaa5-utilities\") pod \"redhat-operators-lhtht\" (UID: \"a954daed-802a-4b46-81ef-7079dcddbaa5\") " pod="openshift-marketplace/redhat-operators-lhtht" Jan 29 15:30:19 crc kubenswrapper[5008]: E0129 15:30:19.982893 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:20.482872739 +0000 UTC m=+164.155726976 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:19 crc kubenswrapper[5008]: I0129 15:30:19.983170 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a954daed-802a-4b46-81ef-7079dcddbaa5-catalog-content\") pod \"redhat-operators-lhtht\" (UID: \"a954daed-802a-4b46-81ef-7079dcddbaa5\") " pod="openshift-marketplace/redhat-operators-lhtht" Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.001016 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a954daed-802a-4b46-81ef-7079dcddbaa5-utilities\") pod \"redhat-operators-lhtht\" (UID: \"a954daed-802a-4b46-81ef-7079dcddbaa5\") " pod="openshift-marketplace/redhat-operators-lhtht" Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.009173 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fpmxk"] Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.041259 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fd6nq" event={"ID":"37742fc9-fce4-41f0-ba04-7232b6e647a7","Type":"ContainerStarted","Data":"335be0a36e05771a7a88d81fee1b61fe29f073571f151738b87168e8e0776f1d"} Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.043903 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkxw5" event={"ID":"6aef1830-577d-405c-bb54-6f9fe217ae86","Type":"ContainerStarted","Data":"57f282b94968e79e724bd40448547c7c110b5b3c35e9677aea1eb21b270ed1d9"} Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.053435 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pfbb\" (UniqueName: \"kubernetes.io/projected/a954daed-802a-4b46-81ef-7079dcddbaa5-kube-api-access-6pfbb\") pod \"redhat-operators-lhtht\" (UID: \"a954daed-802a-4b46-81ef-7079dcddbaa5\") " pod="openshift-marketplace/redhat-operators-lhtht" Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.054296 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" podUID="7d5c80c8-4e74-4618-96c0-8e76168ad709" containerName="controller-manager" containerID="cri-o://4c0c93394c1503334716279d33aab711196676ea784b3c3aa6166010a6b66a0e" gracePeriod=30 Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.057020 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4dwdf" podUID="d2d42845-cca1-4b60-bc84-4b2baebf702b" Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.057474 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl"] Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.057712 5008 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" podUID="f56b5e44-f079-4c56-9e19-e09996979003" containerName="route-controller-manager" containerID="cri-o://8a58e85619a9d68ab7ca1c73646da4750ac77969c5d738aeb0d3b0851d9dc82e" gracePeriod=30 Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.058884 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-cwgw5" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.086808 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.089271 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:20.589241074 +0000 UTC m=+164.262095311 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.116857 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tst9c"] Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.188403 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.189138 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lhtht" Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.190148 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:20.688861122 +0000 UTC m=+164.361715359 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.290322 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.290702 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:20.790688068 +0000 UTC m=+164.463542305 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.327596 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.327725 5008 patch_prober.go:28] interesting pod/apiserver-76f77b778f-4l85w container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 29 15:30:20 crc kubenswrapper[5008]: [+]log ok Jan 29 15:30:20 crc kubenswrapper[5008]: [+]etcd ok Jan 29 15:30:20 crc kubenswrapper[5008]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 29 15:30:20 crc kubenswrapper[5008]: [+]poststarthook/generic-apiserver-start-informers ok Jan 29 15:30:20 crc kubenswrapper[5008]: [+]poststarthook/max-in-flight-filter ok Jan 29 15:30:20 crc kubenswrapper[5008]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 29 15:30:20 crc kubenswrapper[5008]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 29 15:30:20 crc kubenswrapper[5008]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 29 15:30:20 crc kubenswrapper[5008]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 29 15:30:20 crc kubenswrapper[5008]: [+]poststarthook/project.openshift.io-projectcache ok Jan 29 15:30:20 crc kubenswrapper[5008]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 29 15:30:20 crc kubenswrapper[5008]: [-]poststarthook/openshift.io-startinformers failed: reason withheld Jan 29 
15:30:20 crc kubenswrapper[5008]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 29 15:30:20 crc kubenswrapper[5008]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 29 15:30:20 crc kubenswrapper[5008]: livez check failed Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.327814 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-4l85w" podUID="653b37fe-d452-4111-b27f-ef75530abe41" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.327869 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5sl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-z9t2h_openshift-marketplace(250e7db8-88dd-44fd-8d73-51a6f8f4ba96): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.327917 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.328118 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-btkm4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-h7vmc_openshift-marketplace(9bcecb83-1aec-4bd4-9b46-f02deb628018): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.330297 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-z9t2h" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.331388 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-h7vmc" podUID="9bcecb83-1aec-4bd4-9b46-f02deb628018" Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.391152 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.391719 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:20.891699953 +0000 UTC m=+164.564554190 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.391903 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-4268l" Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.394616 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j8wt8" Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.405587 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-zs2tk" Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.459502 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:20 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:20 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:20 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.459563 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.493723 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.495029 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:20.995016118 +0000 UTC m=+164.667870355 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.594616 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.594893 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:21.094859242 +0000 UTC m=+164.767713479 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.595300 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.596012 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:21.096000582 +0000 UTC m=+164.768854809 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.643655 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lhtht"] Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.696884 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.696998 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:21.196977926 +0000 UTC m=+164.869832163 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.697167 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.697457 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:21.197449298 +0000 UTC m=+164.870303535 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.799384 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.799845 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:21.299824898 +0000 UTC m=+164.972679145 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.882567 5008 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.901478 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:20 crc kubenswrapper[5008]: E0129 15:30:20.902000 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:21.401981022 +0000 UTC m=+165.074835259 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.911143 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mqnz8" Jan 29 15:30:20 crc kubenswrapper[5008]: I0129 15:30:20.954069 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvhxk" Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.003041 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:21 crc kubenswrapper[5008]: E0129 15:30:21.003298 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:21.503255225 +0000 UTC m=+165.176109462 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.003484 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs\") pod \"network-metrics-daemon-kkc6c\" (UID: \"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\") " pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.003794 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:21 crc kubenswrapper[5008]: E0129 15:30:21.005225 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:21.505207755 +0000 UTC m=+165.178061992 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.015119 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3716fd8-7f9b-44e2-ac3c-e907d8793dc9-metrics-certs\") pod \"network-metrics-daemon-kkc6c\" (UID: \"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9\") " pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.051594 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhtht" event={"ID":"a954daed-802a-4b46-81ef-7079dcddbaa5","Type":"ContainerStarted","Data":"c7bb2d8d5dfc5bd460b51cbe8abe72fb7d9bc5d3e8c022f6997fb845b267cc34"} Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.053527 5008 generic.go:334] "Generic (PLEG): container finished" podID="6aef1830-577d-405c-bb54-6f9fe217ae86" containerID="b4ed1901a1ac7d83b698c4d263db5514ae2a4bf0aab0e1f9032c155913f5bd2d" exitCode=0 Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.053627 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkxw5" event={"ID":"6aef1830-577d-405c-bb54-6f9fe217ae86","Type":"ContainerDied","Data":"b4ed1901a1ac7d83b698c4d263db5514ae2a4bf0aab0e1f9032c155913f5bd2d"} Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.055378 5008 generic.go:334] "Generic (PLEG): container finished" podID="37742fc9-fce4-41f0-ba04-7232b6e647a7" containerID="07a2fa9e941811bcc7892420659a52c45d0ac131e896badbed2f3faf0a10a2bc" exitCode=0 Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.055451 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fd6nq" event={"ID":"37742fc9-fce4-41f0-ba04-7232b6e647a7","Type":"ContainerDied","Data":"07a2fa9e941811bcc7892420659a52c45d0ac131e896badbed2f3faf0a10a2bc"} Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.057602 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" event={"ID":"5ca041e2-baff-40ee-8fc9-e9bc58aee628","Type":"ContainerStarted","Data":"3a220e753ea80972106fad12775f162fefbbbe237c9a00a237aa821badcac191"} Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.058654 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tst9c" event={"ID":"ea8deba9-72cb-4274-add1-e80591a9e7cc","Type":"ContainerStarted","Data":"add0ef656328b3411c8246a1cffa7e2baeefc91f711bf33d67c37a176e10eb38"} Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.072404 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kkc6c" Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.104811 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:21 crc kubenswrapper[5008]: E0129 15:30:21.105187 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:21.605166122 +0000 UTC m=+165.278020359 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.206518 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:21 crc kubenswrapper[5008]: E0129 15:30:21.206887 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:21.706874825 +0000 UTC m=+165.379729062 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.308171 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:21 crc kubenswrapper[5008]: E0129 15:30:21.308398 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:21.808369433 +0000 UTC m=+165.481223710 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.308600 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:21 crc kubenswrapper[5008]: E0129 15:30:21.309033 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:21.809014629 +0000 UTC m=+165.481868906 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.410233 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:21 crc kubenswrapper[5008]: E0129 15:30:21.410425 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:21.910398973 +0000 UTC m=+165.583253210 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.410597 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:21 crc kubenswrapper[5008]: E0129 15:30:21.410888 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:21.910876647 +0000 UTC m=+165.583730874 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qm54x" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.441122 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:21 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld Jan 29 15:30:21 crc kubenswrapper[5008]: [+]process-running ok Jan 29 15:30:21 crc kubenswrapper[5008]: healthz check failed Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.441185 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.457797 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kkc6c"] Jan 29 15:30:21 crc kubenswrapper[5008]: W0129 15:30:21.464840 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3716fd8_7f9b_44e2_ac3c_e907d8793dc9.slice/crio-f6ca26aae8c21f99e453dab95f84213192c172c0cca557c67f4aaaa7a2e1e57a WatchSource:0}: Error finding container f6ca26aae8c21f99e453dab95f84213192c172c0cca557c67f4aaaa7a2e1e57a: Status 404 returned error can't find the container with id f6ca26aae8c21f99e453dab95f84213192c172c0cca557c67f4aaaa7a2e1e57a Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.512437 5008 reconciler.go:161] "OperationExecutor.RegisterPlugin started" 
plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-29T15:30:20.882622245Z","Handler":null,"Name":""} Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.512991 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:21 crc kubenswrapper[5008]: E0129 15:30:21.513315 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:22.013297228 +0000 UTC m=+165.686151465 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.520773 5008 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.520848 5008 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.614568 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.725164 5008 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.725453 5008 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:21 crc kubenswrapper[5008]: I0129 15:30:21.919486 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qm54x\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:22 crc kubenswrapper[5008]: I0129 15:30:22.024026 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:22 crc kubenswrapper[5008]: I0129 15:30:22.032484 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 15:30:22 crc kubenswrapper[5008]: I0129 15:30:22.065299 5008 generic.go:334] "Generic (PLEG): container finished" podID="f56b5e44-f079-4c56-9e19-e09996979003" containerID="8a58e85619a9d68ab7ca1c73646da4750ac77969c5d738aeb0d3b0851d9dc82e" exitCode=0 Jan 29 15:30:22 crc kubenswrapper[5008]: I0129 15:30:22.065356 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" event={"ID":"f56b5e44-f079-4c56-9e19-e09996979003","Type":"ContainerDied","Data":"8a58e85619a9d68ab7ca1c73646da4750ac77969c5d738aeb0d3b0851d9dc82e"} Jan 29 15:30:22 crc kubenswrapper[5008]: I0129 15:30:22.066625 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" event={"ID":"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9","Type":"ContainerStarted","Data":"f6ca26aae8c21f99e453dab95f84213192c172c0cca557c67f4aaaa7a2e1e57a"} Jan 29 15:30:22 crc kubenswrapper[5008]: I0129 15:30:22.069002 5008 generic.go:334] "Generic (PLEG): container finished" podID="ea8deba9-72cb-4274-add1-e80591a9e7cc" containerID="4b51ccd27d29592df8a7bede95816e1b7ee7978e1541458bdd34bb868c6e0912" exitCode=0 Jan 29 15:30:22 crc kubenswrapper[5008]: I0129 15:30:22.069059 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tst9c" event={"ID":"ea8deba9-72cb-4274-add1-e80591a9e7cc","Type":"ContainerDied","Data":"4b51ccd27d29592df8a7bede95816e1b7ee7978e1541458bdd34bb868c6e0912"} Jan 29 15:30:22 crc kubenswrapper[5008]: I0129 15:30:22.070483 5008 generic.go:334] "Generic (PLEG): container finished" podID="a954daed-802a-4b46-81ef-7079dcddbaa5" containerID="01e163bc6a4525960ce048e49dcc3353c6751e2f22fe5f912048f843ee4812a5" exitCode=0 Jan 29 15:30:22 crc kubenswrapper[5008]: I0129 15:30:22.070528 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhtht" event={"ID":"a954daed-802a-4b46-81ef-7079dcddbaa5","Type":"ContainerDied","Data":"01e163bc6a4525960ce048e49dcc3353c6751e2f22fe5f912048f843ee4812a5"} Jan 29 15:30:22 crc kubenswrapper[5008]: I0129 15:30:22.072108 5008 generic.go:334] "Generic (PLEG): container finished" podID="7d5c80c8-4e74-4618-96c0-8e76168ad709" containerID="4c0c93394c1503334716279d33aab711196676ea784b3c3aa6166010a6b66a0e" exitCode=0 Jan 29 15:30:22 crc kubenswrapper[5008]: I0129 15:30:22.073078 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" event={"ID":"7d5c80c8-4e74-4618-96c0-8e76168ad709","Type":"ContainerDied","Data":"4c0c93394c1503334716279d33aab711196676ea784b3c3aa6166010a6b66a0e"} Jan 29 15:30:22 crc kubenswrapper[5008]: I0129 15:30:22.112150 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:22 crc kubenswrapper[5008]: E0129 15:30:22.233681 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 29 15:30:22 crc kubenswrapper[5008]: E0129 15:30:22.233710 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 29 15:30:22 crc kubenswrapper[5008]: E0129 15:30:22.233840 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lw6k4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fd6nq_openshift-marketplace(37742fc9-fce4-41f0-ba04-7232b6e647a7): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 15:30:22 crc kubenswrapper[5008]: E0129 15:30:22.233887 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ftbd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-mkxw5_openshift-marketplace(6aef1830-577d-405c-bb54-6f9fe217ae86): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 15:30:22 crc kubenswrapper[5008]: E0129 15:30:22.235228 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-fd6nq" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7"
Jan 29 15:30:22 crc kubenswrapper[5008]: E0129 15:30:22.235354 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-mkxw5" podUID="6aef1830-577d-405c-bb54-6f9fe217ae86"
Jan 29 15:30:22 crc kubenswrapper[5008]: I0129 15:30:22.359002 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qm54x"]
Jan 29 15:30:22 crc kubenswrapper[5008]: W0129 15:30:22.363874 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30c54800_b443_4da8_9d41_22e8f156a1a1.slice/crio-59462ccb837299ee29a72d7df21357033cdf6b013812c469de4c5ef1edbad70d WatchSource:0}: Error finding container 59462ccb837299ee29a72d7df21357033cdf6b013812c469de4c5ef1edbad70d: Status 404 returned error can't find the container with id 59462ccb837299ee29a72d7df21357033cdf6b013812c469de4c5ef1edbad70d
Jan 29 15:30:22 crc kubenswrapper[5008]: I0129 15:30:22.443275 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:30:22 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld
Jan 29 15:30:22 crc kubenswrapper[5008]: [+]process-running ok
Jan 29 15:30:22 crc kubenswrapper[5008]: healthz check failed
Jan 29 15:30:22 crc kubenswrapper[5008]: I0129 15:30:22.443338 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.037717 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl"
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.066085 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46"]
Jan 29 15:30:23 crc kubenswrapper[5008]: E0129 15:30:23.066361 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f56b5e44-f079-4c56-9e19-e09996979003" containerName="route-controller-manager"
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.066378 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="f56b5e44-f079-4c56-9e19-e09996979003" containerName="route-controller-manager"
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.066588 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="f56b5e44-f079-4c56-9e19-e09996979003" containerName="route-controller-manager"
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.067093 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46"
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.076567 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46"]
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.087107 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl" event={"ID":"f56b5e44-f079-4c56-9e19-e09996979003","Type":"ContainerDied","Data":"283a3b198b8ebcea901bee24ad0194d994a822693f8e2f8f5e5b86077a5737c1"}
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.087178 5008 scope.go:117] "RemoveContainer" containerID="8a58e85619a9d68ab7ca1c73646da4750ac77969c5d738aeb0d3b0851d9dc82e"
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.088120 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl"
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.090706 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" event={"ID":"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9","Type":"ContainerStarted","Data":"3973d52fa588768002d1f544c8d86d854d2542c7e734d160c088f88e6ba4e231"}
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.093308 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" event={"ID":"5ca041e2-baff-40ee-8fc9-e9bc58aee628","Type":"ContainerStarted","Data":"03dc70e8eaf7dffe3a41b4db12c793f0ddba7b43611b8a4e8388ec0d7320f21b"}
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.094168 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" event={"ID":"30c54800-b443-4da8-9d41-22e8f156a1a1","Type":"ContainerStarted","Data":"59462ccb837299ee29a72d7df21357033cdf6b013812c469de4c5ef1edbad70d"}
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.124610 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk"
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.140278 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f56b5e44-f079-4c56-9e19-e09996979003-config\") pod \"f56b5e44-f079-4c56-9e19-e09996979003\" (UID: \"f56b5e44-f079-4c56-9e19-e09996979003\") "
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.140342 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cdqj\" (UniqueName: \"kubernetes.io/projected/f56b5e44-f079-4c56-9e19-e09996979003-kube-api-access-4cdqj\") pod \"f56b5e44-f079-4c56-9e19-e09996979003\" (UID: \"f56b5e44-f079-4c56-9e19-e09996979003\") "
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.140409 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f56b5e44-f079-4c56-9e19-e09996979003-serving-cert\") pod \"f56b5e44-f079-4c56-9e19-e09996979003\" (UID: \"f56b5e44-f079-4c56-9e19-e09996979003\") "
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.140444 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f56b5e44-f079-4c56-9e19-e09996979003-client-ca\") pod \"f56b5e44-f079-4c56-9e19-e09996979003\" (UID: \"f56b5e44-f079-4c56-9e19-e09996979003\") "
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.140636 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dqd9\" (UniqueName: \"kubernetes.io/projected/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-kube-api-access-7dqd9\") pod \"route-controller-manager-64b449df99-q9t46\" (UID: \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\") " pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46"
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.140700 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-config\") pod \"route-controller-manager-64b449df99-q9t46\" (UID: \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\") " pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46"
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.140811 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-client-ca\") pod \"route-controller-manager-64b449df99-q9t46\" (UID: \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\") " pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46"
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.140849 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-serving-cert\") pod \"route-controller-manager-64b449df99-q9t46\" (UID: \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\") " pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46"
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.147372 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f56b5e44-f079-4c56-9e19-e09996979003-config" (OuterVolumeSpecName: "config") pod "f56b5e44-f079-4c56-9e19-e09996979003" (UID: "f56b5e44-f079-4c56-9e19-e09996979003"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.147374 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f56b5e44-f079-4c56-9e19-e09996979003-client-ca" (OuterVolumeSpecName: "client-ca") pod "f56b5e44-f079-4c56-9e19-e09996979003" (UID: "f56b5e44-f079-4c56-9e19-e09996979003"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.147918 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f56b5e44-f079-4c56-9e19-e09996979003-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f56b5e44-f079-4c56-9e19-e09996979003" (UID: "f56b5e44-f079-4c56-9e19-e09996979003"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.153064 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f56b5e44-f079-4c56-9e19-e09996979003-kube-api-access-4cdqj" (OuterVolumeSpecName: "kube-api-access-4cdqj") pod "f56b5e44-f079-4c56-9e19-e09996979003" (UID: "f56b5e44-f079-4c56-9e19-e09996979003"). InnerVolumeSpecName "kube-api-access-4cdqj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:30:23 crc kubenswrapper[5008]: E0129 15:30:23.220612 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 29 15:30:23 crc kubenswrapper[5008]: E0129 15:30:23.220762 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-229kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-tst9c_openshift-marketplace(ea8deba9-72cb-4274-add1-e80591a9e7cc): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 15:30:23 crc kubenswrapper[5008]: E0129 15:30:23.221998 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-tst9c" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc"
Jan 29 15:30:23 crc kubenswrapper[5008]: E0129 15:30:23.222215 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 29 15:30:23 crc kubenswrapper[5008]: E0129 15:30:23.222303 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6pfbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-lhtht_openshift-marketplace(a954daed-802a-4b46-81ef-7079dcddbaa5): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 15:30:23 crc kubenswrapper[5008]: E0129 15:30:23.223355 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-lhtht" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5"
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.242379 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-proxy-ca-bundles\") pod \"7d5c80c8-4e74-4618-96c0-8e76168ad709\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") "
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.242517 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqdxf\" (UniqueName: \"kubernetes.io/projected/7d5c80c8-4e74-4618-96c0-8e76168ad709-kube-api-access-dqdxf\") pod \"7d5c80c8-4e74-4618-96c0-8e76168ad709\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") "
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.242548 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-config\") pod \"7d5c80c8-4e74-4618-96c0-8e76168ad709\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") "
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.242572 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-client-ca\") pod \"7d5c80c8-4e74-4618-96c0-8e76168ad709\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") "
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.242595 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d5c80c8-4e74-4618-96c0-8e76168ad709-serving-cert\") pod \"7d5c80c8-4e74-4618-96c0-8e76168ad709\" (UID: \"7d5c80c8-4e74-4618-96c0-8e76168ad709\") "
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.242836 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dqd9\" (UniqueName: \"kubernetes.io/projected/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-kube-api-access-7dqd9\") pod \"route-controller-manager-64b449df99-q9t46\" (UID: \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\") " pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46"
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.242881 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-config\") pod \"route-controller-manager-64b449df99-q9t46\" (UID: \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\") " pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46"
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.242942 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-client-ca\") pod \"route-controller-manager-64b449df99-q9t46\" (UID: \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\") " pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46"
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.242966 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-serving-cert\") pod \"route-controller-manager-64b449df99-q9t46\" (UID: \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\") " pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46"
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.243034 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f56b5e44-f079-4c56-9e19-e09996979003-config\") on node \"crc\" DevicePath \"\""
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.243046 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cdqj\" (UniqueName: \"kubernetes.io/projected/f56b5e44-f079-4c56-9e19-e09996979003-kube-api-access-4cdqj\") on node \"crc\" DevicePath \"\""
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.243060 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f56b5e44-f079-4c56-9e19-e09996979003-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.243073 5008 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f56b5e44-f079-4c56-9e19-e09996979003-client-ca\") on node \"crc\" DevicePath \"\""
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.243207 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7d5c80c8-4e74-4618-96c0-8e76168ad709" (UID: "7d5c80c8-4e74-4618-96c0-8e76168ad709"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.243879 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-client-ca" (OuterVolumeSpecName: "client-ca") pod "7d5c80c8-4e74-4618-96c0-8e76168ad709" (UID: "7d5c80c8-4e74-4618-96c0-8e76168ad709"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.243971 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-client-ca\") pod \"route-controller-manager-64b449df99-q9t46\" (UID: \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\") " pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46"
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.244037 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-config\") pod \"route-controller-manager-64b449df99-q9t46\" (UID: \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\") " pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46"
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.244310 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-config" (OuterVolumeSpecName: "config") pod "7d5c80c8-4e74-4618-96c0-8e76168ad709" (UID: "7d5c80c8-4e74-4618-96c0-8e76168ad709"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.245977 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d5c80c8-4e74-4618-96c0-8e76168ad709-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7d5c80c8-4e74-4618-96c0-8e76168ad709" (UID: "7d5c80c8-4e74-4618-96c0-8e76168ad709"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.246375 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d5c80c8-4e74-4618-96c0-8e76168ad709-kube-api-access-dqdxf" (OuterVolumeSpecName: "kube-api-access-dqdxf") pod "7d5c80c8-4e74-4618-96c0-8e76168ad709" (UID: "7d5c80c8-4e74-4618-96c0-8e76168ad709"). InnerVolumeSpecName "kube-api-access-dqdxf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.249136 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-serving-cert\") pod \"route-controller-manager-64b449df99-q9t46\" (UID: \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\") " pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46"
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.257145 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dqd9\" (UniqueName: \"kubernetes.io/projected/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-kube-api-access-7dqd9\") pod \"route-controller-manager-64b449df99-q9t46\" (UID: \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\") " pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46"
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.331327 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.344213 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqdxf\" (UniqueName: \"kubernetes.io/projected/7d5c80c8-4e74-4618-96c0-8e76168ad709-kube-api-access-dqdxf\") on node \"crc\" DevicePath \"\""
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.344248 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-config\") on node \"crc\" DevicePath \"\""
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.344260 5008 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-client-ca\") on node \"crc\" DevicePath \"\""
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.344271 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d5c80c8-4e74-4618-96c0-8e76168ad709-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.344283 5008 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d5c80c8-4e74-4618-96c0-8e76168ad709-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.423133 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46"
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.427728 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl"]
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.431875 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4zwkl"]
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.441585 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:30:23 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld
Jan 29 15:30:23 crc kubenswrapper[5008]: [+]process-running ok
Jan 29 15:30:23 crc kubenswrapper[5008]: healthz check failed
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.441650 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:30:23 crc kubenswrapper[5008]: I0129 15:30:23.632588 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46"]
Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.107849 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kkc6c" event={"ID":"f3716fd8-7f9b-44e2-ac3c-e907d8793dc9","Type":"ContainerStarted","Data":"a1e1e230de516adb80a0bc23e6ccd4421ec96f5e899ddf60854c3cf44cd677da"}
Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.109438 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" event={"ID":"30c54800-b443-4da8-9d41-22e8f156a1a1","Type":"ContainerStarted","Data":"30e2e1673271910cbbe5ac685fc8d9b9256d07c42ba932c22e18da6b153ba5d5"}
Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.109616 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-qm54x"
Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.111136 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk" event={"ID":"7d5c80c8-4e74-4618-96c0-8e76168ad709","Type":"ContainerDied","Data":"877a7a5331b5add1273bcb856b0a6b558e22fc4ee16ab1f101067f85b3c64f92"}
Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.111175 5008 scope.go:117] "RemoveContainer" containerID="4c0c93394c1503334716279d33aab711196676ea784b3c3aa6166010a6b66a0e"
Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.111302 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fpmxk"
Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.116380 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" event={"ID":"afb7e8b5-ea3c-41ae-89da-ab5ec7171600","Type":"ContainerStarted","Data":"3b02507460795f19821a392cda839dd09d546d1c9003a8fa34c584311783c49f"}
Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.116421 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" event={"ID":"afb7e8b5-ea3c-41ae-89da-ab5ec7171600","Type":"ContainerStarted","Data":"046a17c590098826d0a5eac7cd1935848d5c2b4be0940c2d3316db2e124ab690"}
Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.116437 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46"
Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.125816 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-kkc6c" podStartSLOduration=146.125777617 podStartE2EDuration="2m26.125777617s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:24.123172109 +0000 UTC m=+167.796026346" watchObservedRunningTime="2026-01-29 15:30:24.125777617 +0000 UTC m=+167.798631874"
Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.143491 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fpmxk"]
Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.148939 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fpmxk"]
Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.159034 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" podStartSLOduration=146.159015597 podStartE2EDuration="2m26.159015597s" podCreationTimestamp="2026-01-29 15:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:24.158460132 +0000 UTC m=+167.831314379" watchObservedRunningTime="2026-01-29 15:30:24.159015597 +0000 UTC m=+167.831869844"
Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.179560 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-g9x2n" podStartSLOduration=27.179540054 podStartE2EDuration="27.179540054s" podCreationTimestamp="2026-01-29 15:29:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:24.175631392 +0000 UTC m=+167.848485649" watchObservedRunningTime="2026-01-29 15:30:24.179540054 +0000 UTC m=+167.852394291"
Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.191845 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" podStartSLOduration=4.191822706 podStartE2EDuration="4.191822706s" podCreationTimestamp="2026-01-29 15:30:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:24.191234811 +0000 UTC m=+167.864089068" watchObservedRunningTime="2026-01-29 15:30:24.191822706 +0000 UTC m=+167.864676963"
Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.220157 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46"
Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.441132 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:30:24 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld
Jan 29 15:30:24 crc kubenswrapper[5008]: [+]process-running ok
Jan 29 15:30:24 crc kubenswrapper[5008]: healthz check failed
Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.441209 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.775205 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-4l85w"
Jan 29 15:30:24 crc kubenswrapper[5008]: I0129 15:30:24.780792 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-4l85w"
Jan 29 15:30:25 crc kubenswrapper[5008]: I0129 15:30:25.341491 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d5c80c8-4e74-4618-96c0-8e76168ad709" path="/var/lib/kubelet/pods/7d5c80c8-4e74-4618-96c0-8e76168ad709/volumes"
Jan 29 15:30:25 crc kubenswrapper[5008]: I0129 15:30:25.342730 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f56b5e44-f079-4c56-9e19-e09996979003" path="/var/lib/kubelet/pods/f56b5e44-f079-4c56-9e19-e09996979003/volumes"
Jan 29 15:30:25 crc kubenswrapper[5008]: I0129 15:30:25.441848 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:30:25 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld
Jan 29 15:30:25 crc kubenswrapper[5008]: [+]process-running ok
Jan 29 15:30:25 crc kubenswrapper[5008]: healthz check failed
Jan 29 15:30:25 crc kubenswrapper[5008]: I0129 15:30:25.441945 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.000444 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d7649699d-6xx6r"]
Jan 29 15:30:26 crc kubenswrapper[5008]: E0129 15:30:26.000725 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5c80c8-4e74-4618-96c0-8e76168ad709" containerName="controller-manager"
Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.000742 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5c80c8-4e74-4618-96c0-8e76168ad709" containerName="controller-manager"
Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.000881 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5c80c8-4e74-4618-96c0-8e76168ad709" containerName="controller-manager"
Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.001995 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r"
Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.005692 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.005708 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.006261 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.006258 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.006420 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.007541 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.027407 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.038533 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d7649699d-6xx6r"]
Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.080256 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-serving-cert\") pod \"controller-manager-d7649699d-6xx6r\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r"
Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.080311 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-config\") pod \"controller-manager-d7649699d-6xx6r\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r"
Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.080331 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-client-ca\") pod \"controller-manager-d7649699d-6xx6r\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r"
Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.080349 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-proxy-ca-bundles\") pod \"controller-manager-d7649699d-6xx6r\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r"
Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.080416 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw9rs\" (UniqueName: \"kubernetes.io/projected/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-kube-api-access-cw9rs\") pod \"controller-manager-d7649699d-6xx6r\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r"
Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.182049 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-serving-cert\") pod \"controller-manager-d7649699d-6xx6r\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r"
Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.182192 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-config\") pod \"controller-manager-d7649699d-6xx6r\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r"
Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.182240 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-client-ca\") pod \"controller-manager-d7649699d-6xx6r\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r"
Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.182287 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-proxy-ca-bundles\") pod \"controller-manager-d7649699d-6xx6r\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r"
Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.182349 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw9rs\" (UniqueName: \"kubernetes.io/projected/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-kube-api-access-cw9rs\") pod \"controller-manager-d7649699d-6xx6r\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r"
Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.183690 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-proxy-ca-bundles\") pod \"controller-manager-d7649699d-6xx6r\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r"
Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.183707 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-client-ca\") pod \"controller-manager-d7649699d-6xx6r\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r"
Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.183953 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-config\") pod \"controller-manager-d7649699d-6xx6r\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r"
Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.190510 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-serving-cert\") pod \"controller-manager-d7649699d-6xx6r\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r"
Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.203727 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw9rs\" (UniqueName: \"kubernetes.io/projected/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-kube-api-access-cw9rs\") pod \"controller-manager-d7649699d-6xx6r\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r"
Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.326496 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r"
Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.451897 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:30:26 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld
Jan 29 15:30:26 crc kubenswrapper[5008]: [+]process-running ok
Jan 29 15:30:26 crc kubenswrapper[5008]: healthz check failed
Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.452311 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:30:26 crc kubenswrapper[5008]: I0129 15:30:26.733589 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d7649699d-6xx6r"]
Jan 29 15:30:27 crc kubenswrapper[5008]: I0129 15:30:27.139213 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" event={"ID":"7dbbd108-38e5-44c9-a6a8-efaec064d3f0","Type":"ContainerStarted","Data":"a89ad9ebedb6a41ee71edf80b0a6e1658e17f7834cb3f34aa4f8d7ca83f8b7f5"}
Jan 29 15:30:27 crc kubenswrapper[5008]: I0129 15:30:27.139286 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" event={"ID":"7dbbd108-38e5-44c9-a6a8-efaec064d3f0","Type":"ContainerStarted","Data":"c89ec15a06edbf1f2377f00b216c0feab1fc55200bf490f013cd333af9148873"}
Jan 29 15:30:27 crc kubenswrapper[5008]: I0129 15:30:27.139876 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r"
Jan 29 15:30:27 crc kubenswrapper[5008]: I0129 15:30:27.142029 5008 patch_prober.go:28] interesting pod/controller-manager-d7649699d-6xx6r container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Jan 29 15:30:27 crc kubenswrapper[5008]: I0129 15:30:27.142105 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" podUID="7dbbd108-38e5-44c9-a6a8-efaec064d3f0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused"
Jan 29 15:30:27 crc kubenswrapper[5008]: I0129 15:30:27.165053 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" podStartSLOduration=7.165030899 podStartE2EDuration="7.165030899s" podCreationTimestamp="2026-01-29 15:30:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:27.160168422 +0000 UTC m=+170.833022669" watchObservedRunningTime="2026-01-29 15:30:27.165030899 +0000 UTC m=+170.837885126"
Jan 29 15:30:27 crc kubenswrapper[5008]: I0129 15:30:27.444682 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:30:27 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld
Jan 29 15:30:27 crc kubenswrapper[5008]: [+]process-running ok
Jan 29 15:30:27 crc kubenswrapper[5008]: healthz check failed
Jan 29 15:30:27 crc kubenswrapper[5008]: I0129 15:30:27.444738 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:30:28 crc kubenswrapper[5008]: I0129 15:30:28.150733 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r"
Jan 29 15:30:28 crc kubenswrapper[5008]: I0129 15:30:28.440100 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:30:28 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld
Jan 29 15:30:28 crc kubenswrapper[5008]: [+]process-running ok
Jan 29 15:30:28 crc kubenswrapper[5008]: healthz check failed
Jan 29 15:30:28 crc kubenswrapper[5008]: I0129 15:30:28.440198 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:30:29 crc kubenswrapper[5008]: I0129 15:30:29.124650 5008 patch_prober.go:28] interesting pod/downloads-7954f5f757-6wmrp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body=
Jan 29 15:30:29 crc kubenswrapper[5008]: I0129 15:30:29.124720 5008 patch_prober.go:28] interesting pod/downloads-7954f5f757-6wmrp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body=
Jan 29 15:30:29 crc kubenswrapper[5008]: I0129 15:30:29.124724 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6wmrp" podUID="64cf2ff9-40f4-48a5-a16c-6513cf0470bd" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused"
Jan 29 15:30:29 crc kubenswrapper[5008]: I0129 15:30:29.124824 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-6wmrp" podUID="64cf2ff9-40f4-48a5-a16c-6513cf0470bd" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused"
Jan 29 15:30:29 crc kubenswrapper[5008]: I0129 15:30:29.124915 5008 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-6wmrp"
Jan 29 15:30:29 crc kubenswrapper[5008]: I0129 15:30:29.125590 5008 patch_prober.go:28] interesting pod/downloads-7954f5f757-6wmrp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body=
Jan 29 15:30:29 crc kubenswrapper[5008]: I0129 15:30:29.125627 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6wmrp" podUID="64cf2ff9-40f4-48a5-a16c-6513cf0470bd" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused"
Jan 29 15:30:29 crc kubenswrapper[5008]: I0129 15:30:29.126041 5008 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"b7c6360486afb3695d7f0cab5e94240be2d35122a76f5d2f164ac0cff78e316c"} pod="openshift-console/downloads-7954f5f757-6wmrp" containerMessage="Container download-server failed liveness probe, will be restarted"
Jan 29 15:30:29 crc kubenswrapper[5008]: I0129 15:30:29.126171 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-6wmrp" podUID="64cf2ff9-40f4-48a5-a16c-6513cf0470bd" containerName="download-server" containerID="cri-o://b7c6360486afb3695d7f0cab5e94240be2d35122a76f5d2f164ac0cff78e316c" gracePeriod=2
Jan 29 15:30:29 crc kubenswrapper[5008]: I0129 15:30:29.442632 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:30:29 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld
Jan 29 15:30:29 crc kubenswrapper[5008]: [+]process-running ok
Jan 29 15:30:29 crc kubenswrapper[5008]: healthz check failed
Jan 29 15:30:29 crc kubenswrapper[5008]: I0129 15:30:29.443179 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:30:29 crc kubenswrapper[5008]: I0129 15:30:29.800245 5008 patch_prober.go:28] interesting pod/console-f9d7485db-g2rk6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Jan 29 15:30:29 crc kubenswrapper[5008]: I0129 15:30:29.800336 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-g2rk6" podUID="3f7de4a5-3819-41c0-9e2e-766dcff408bb" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused"
Jan 29 15:30:30 crc kubenswrapper[5008]: I0129 15:30:30.157883 5008 generic.go:334] "Generic (PLEG): container finished" podID="64cf2ff9-40f4-48a5-a16c-6513cf0470bd" containerID="b7c6360486afb3695d7f0cab5e94240be2d35122a76f5d2f164ac0cff78e316c" exitCode=0
Jan 29 15:30:30 crc kubenswrapper[5008]: I0129 15:30:30.157924 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6wmrp" event={"ID":"64cf2ff9-40f4-48a5-a16c-6513cf0470bd","Type":"ContainerDied","Data":"b7c6360486afb3695d7f0cab5e94240be2d35122a76f5d2f164ac0cff78e316c"}
Jan 29 15:30:30 crc kubenswrapper[5008]: I0129 15:30:30.157984 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6wmrp" event={"ID":"64cf2ff9-40f4-48a5-a16c-6513cf0470bd","Type":"ContainerStarted","Data":"04c64d72761a6b02c0284552d691c629d23e97b7073e08ff256271e0b02d6962"}
Jan 29 15:30:30 crc kubenswrapper[5008]: I0129 15:30:30.158524 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-6wmrp"
Jan 29 15:30:30 crc kubenswrapper[5008]: I0129 15:30:30.158690 5008 patch_prober.go:28] interesting pod/downloads-7954f5f757-6wmrp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body=
Jan 29 15:30:30 crc kubenswrapper[5008]: I0129 15:30:30.158734 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6wmrp" podUID="64cf2ff9-40f4-48a5-a16c-6513cf0470bd" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused"
Jan 29 15:30:30 crc kubenswrapper[5008]: I0129 15:30:30.442104 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:30:30 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld
Jan 29 15:30:30 crc kubenswrapper[5008]: [+]process-running ok
Jan 29 15:30:30 crc kubenswrapper[5008]: healthz check failed
Jan 29 15:30:30 crc kubenswrapper[5008]: I0129 15:30:30.442175 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:30:31 crc kubenswrapper[5008]: I0129 15:30:31.165693 5008 patch_prober.go:28] interesting pod/downloads-7954f5f757-6wmrp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body=
Jan 29 15:30:31 crc kubenswrapper[5008]: I0129 15:30:31.165751 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6wmrp" podUID="64cf2ff9-40f4-48a5-a16c-6513cf0470bd" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused"
Jan 29 15:30:31 crc kubenswrapper[5008]: I0129 15:30:31.440909 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:30:31 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld
Jan 29 15:30:31 crc kubenswrapper[5008]: [+]process-running ok
Jan 29 15:30:31 crc kubenswrapper[5008]: healthz check failed
Jan 29 15:30:31 crc kubenswrapper[5008]: I0129 15:30:31.441184 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:30:32 crc kubenswrapper[5008]: I0129 15:30:32.442096 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:30:32 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld
Jan 29 15:30:32 crc kubenswrapper[5008]: [+]process-running ok
Jan 29 15:30:32 crc kubenswrapper[5008]: healthz check failed
Jan 29 15:30:32 crc kubenswrapper[5008]: I0129 15:30:32.442200 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:30:33 crc kubenswrapper[5008]: I0129 15:30:33.440335 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:30:33 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld
Jan 29 15:30:33 crc kubenswrapper[5008]: [+]process-running ok
Jan 29 15:30:33 crc kubenswrapper[5008]: healthz check failed
Jan 29 15:30:33 crc kubenswrapper[5008]: I0129 15:30:33.440400 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:30:34 crc kubenswrapper[5008]: I0129 15:30:34.446050 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:30:34 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld
Jan 29 15:30:34 crc kubenswrapper[5008]: [+]process-running ok
Jan 29 15:30:34 crc kubenswrapper[5008]: healthz check failed
Jan 29 15:30:34 crc kubenswrapper[5008]: I0129 15:30:34.446126 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:30:35 crc kubenswrapper[5008]: I0129 15:30:35.457104 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:30:35 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld
Jan 29 15:30:35 crc kubenswrapper[5008]: [+]process-running ok
Jan 29 15:30:35 crc kubenswrapper[5008]: healthz check failed
Jan 29 15:30:35 crc kubenswrapper[5008]: I0129 15:30:35.457498 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:30:36 crc kubenswrapper[5008]: I0129 15:30:36.441719 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:30:36 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld
Jan 29 15:30:36 crc kubenswrapper[5008]: [+]process-running ok
Jan 29 15:30:36 crc kubenswrapper[5008]: healthz check failed
Jan 29 15:30:36 crc kubenswrapper[5008]: I0129 15:30:36.441932 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:30:36 crc kubenswrapper[5008]: I0129 15:30:36.844240 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d7649699d-6xx6r"]
Jan 29 15:30:36 crc kubenswrapper[5008]: I0129 15:30:36.844558 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" podUID="7dbbd108-38e5-44c9-a6a8-efaec064d3f0" containerName="controller-manager" containerID="cri-o://a89ad9ebedb6a41ee71edf80b0a6e1658e17f7834cb3f34aa4f8d7ca83f8b7f5" gracePeriod=30
Jan 29 15:30:36 crc kubenswrapper[5008]: I0129 15:30:36.866789 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46"]
Jan 29 15:30:36 crc kubenswrapper[5008]: I0129 15:30:36.867004 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" podUID="afb7e8b5-ea3c-41ae-89da-ab5ec7171600" containerName="route-controller-manager" containerID="cri-o://3b02507460795f19821a392cda839dd09d546d1c9003a8fa34c584311783c49f" gracePeriod=30
Jan 29 15:30:37 crc kubenswrapper[5008]: I0129 15:30:37.441692 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:30:37 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld
Jan 29 15:30:37 crc kubenswrapper[5008]: [+]process-running ok
Jan 29 15:30:37 crc kubenswrapper[5008]: healthz check failed
Jan 29 15:30:37 crc kubenswrapper[5008]: I0129 15:30:37.441747 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:30:38 crc kubenswrapper[5008]: I0129 15:30:38.231173 5008 generic.go:334] "Generic (PLEG): container finished" podID="afb7e8b5-ea3c-41ae-89da-ab5ec7171600" containerID="3b02507460795f19821a392cda839dd09d546d1c9003a8fa34c584311783c49f" exitCode=0
Jan 29 15:30:38 crc kubenswrapper[5008]: I0129 15:30:38.231262 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" event={"ID":"afb7e8b5-ea3c-41ae-89da-ab5ec7171600","Type":"ContainerDied","Data":"3b02507460795f19821a392cda839dd09d546d1c9003a8fa34c584311783c49f"}
Jan 29 15:30:38 crc kubenswrapper[5008]: I0129 15:30:38.233927 5008 generic.go:334] "Generic (PLEG): container finished" podID="7dbbd108-38e5-44c9-a6a8-efaec064d3f0" containerID="a89ad9ebedb6a41ee71edf80b0a6e1658e17f7834cb3f34aa4f8d7ca83f8b7f5" exitCode=0
Jan 29 15:30:38 crc kubenswrapper[5008]: I0129 15:30:38.233976 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" event={"ID":"7dbbd108-38e5-44c9-a6a8-efaec064d3f0","Type":"ContainerDied","Data":"a89ad9ebedb6a41ee71edf80b0a6e1658e17f7834cb3f34aa4f8d7ca83f8b7f5"}
Jan 29 15:30:38 crc kubenswrapper[5008]: I0129 15:30:38.441792 5008 patch_prober.go:28] interesting pod/router-default-5444994796-lkcrp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:30:38 crc kubenswrapper[5008]: [-]has-synced failed: reason withheld
Jan 29 15:30:38 crc kubenswrapper[5008]: [+]process-running ok
Jan 29 15:30:38 crc kubenswrapper[5008]: healthz check failed
Jan 29 15:30:38 crc kubenswrapper[5008]: I0129 15:30:38.441854 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lkcrp" podUID="380625b0-02b5-417a-bd1e-7ccf56f56059" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:30:39 crc kubenswrapper[5008]: I0129 15:30:39.125315 5008 patch_prober.go:28] interesting pod/downloads-7954f5f757-6wmrp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body=
Jan 29 15:30:39 crc kubenswrapper[5008]: I0129 15:30:39.125723 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-6wmrp" podUID="64cf2ff9-40f4-48a5-a16c-6513cf0470bd" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused"
Jan 29 15:30:39 crc kubenswrapper[5008]: I0129 15:30:39.126070 5008 patch_prober.go:28] interesting pod/downloads-7954f5f757-6wmrp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body=
Jan 29 15:30:39 crc kubenswrapper[5008]: I0129 15:30:39.126133 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6wmrp" podUID="64cf2ff9-40f4-48a5-a16c-6513cf0470bd" containerName="download-server" probeResult="failure" output="Get
\"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" Jan 29 15:30:39 crc kubenswrapper[5008]: I0129 15:30:39.442333 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:30:39 crc kubenswrapper[5008]: I0129 15:30:39.446221 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-lkcrp" Jan 29 15:30:39 crc kubenswrapper[5008]: I0129 15:30:39.799655 5008 patch_prober.go:28] interesting pod/console-f9d7485db-g2rk6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 29 15:30:39 crc kubenswrapper[5008]: I0129 15:30:39.799871 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-g2rk6" podUID="3f7de4a5-3819-41c0-9e2e-766dcff408bb" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 29 15:30:40 crc kubenswrapper[5008]: I0129 15:30:40.935548 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jbk" Jan 29 15:30:42 crc kubenswrapper[5008]: I0129 15:30:42.119626 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:30:43 crc kubenswrapper[5008]: I0129 15:30:43.990405 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:30:43 crc kubenswrapper[5008]: I0129 15:30:43.990521 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:30:44 crc kubenswrapper[5008]: I0129 15:30:44.424988 5008 patch_prober.go:28] interesting pod/route-controller-manager-64b449df99-q9t46 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 15:30:44 crc kubenswrapper[5008]: I0129 15:30:44.425140 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" podUID="afb7e8b5-ea3c-41ae-89da-ab5ec7171600" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 15:30:46 crc kubenswrapper[5008]: I0129 15:30:46.783748 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:47 crc kubenswrapper[5008]: I0129 15:30:47.327201 5008 patch_prober.go:28] interesting pod/controller-manager-d7649699d-6xx6r container/controller-manager 
namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 15:30:47 crc kubenswrapper[5008]: I0129 15:30:47.327273 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" podUID="7dbbd108-38e5-44c9-a6a8-efaec064d3f0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 15:30:47 crc kubenswrapper[5008]: I0129 15:30:47.928177 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" Jan 29 15:30:47 crc kubenswrapper[5008]: I0129 15:30:47.943056 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" Jan 29 15:30:47 crc kubenswrapper[5008]: I0129 15:30:47.986452 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb"] Jan 29 15:30:47 crc kubenswrapper[5008]: E0129 15:30:47.986836 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afb7e8b5-ea3c-41ae-89da-ab5ec7171600" containerName="route-controller-manager" Jan 29 15:30:47 crc kubenswrapper[5008]: I0129 15:30:47.986854 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="afb7e8b5-ea3c-41ae-89da-ab5ec7171600" containerName="route-controller-manager" Jan 29 15:30:47 crc kubenswrapper[5008]: E0129 15:30:47.986893 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dbbd108-38e5-44c9-a6a8-efaec064d3f0" containerName="controller-manager" Jan 29 15:30:47 crc kubenswrapper[5008]: I0129 15:30:47.986899 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dbbd108-38e5-44c9-a6a8-efaec064d3f0" containerName="controller-manager" Jan 29 15:30:47 crc kubenswrapper[5008]: I0129 15:30:47.987011 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dbbd108-38e5-44c9-a6a8-efaec064d3f0" containerName="controller-manager" Jan 29 15:30:47 crc kubenswrapper[5008]: I0129 15:30:47.987048 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="afb7e8b5-ea3c-41ae-89da-ab5ec7171600" containerName="route-controller-manager" Jan 29 15:30:47 crc kubenswrapper[5008]: I0129 15:30:47.987453 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" Jan 29 15:30:47 crc kubenswrapper[5008]: I0129 15:30:47.988720 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b46f1f12-a290-441c-a3bb-4584cc2a3102-client-ca\") pod \"route-controller-manager-65dbd47846-qgvzb\" (UID: \"b46f1f12-a290-441c-a3bb-4584cc2a3102\") " pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" Jan 29 15:30:47 crc kubenswrapper[5008]: I0129 15:30:47.988823 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b46f1f12-a290-441c-a3bb-4584cc2a3102-config\") pod \"route-controller-manager-65dbd47846-qgvzb\" (UID: \"b46f1f12-a290-441c-a3bb-4584cc2a3102\") " pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" Jan 29 15:30:47 crc kubenswrapper[5008]: I0129 15:30:47.989009 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b46f1f12-a290-441c-a3bb-4584cc2a3102-serving-cert\") pod \"route-controller-manager-65dbd47846-qgvzb\" (UID: \"b46f1f12-a290-441c-a3bb-4584cc2a3102\") " pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" Jan 29 15:30:47 crc kubenswrapper[5008]: I0129 15:30:47.991509 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb"] Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.090172 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cw9rs\" (UniqueName: \"kubernetes.io/projected/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-kube-api-access-cw9rs\") pod \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.090256 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-serving-cert\") pod \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.090293 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-client-ca\") pod \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.090359 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-serving-cert\") pod \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\" (UID: \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\") " Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.090386 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dqd9\" (UniqueName: \"kubernetes.io/projected/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-kube-api-access-7dqd9\") pod \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\" (UID: \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\") " Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.090413 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-client-ca\") pod \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\" (UID: \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\") " Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.090469 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-config\") pod \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\" (UID: \"afb7e8b5-ea3c-41ae-89da-ab5ec7171600\") " Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.090489 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-config\") pod \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.090517 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-proxy-ca-bundles\") pod \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\" (UID: \"7dbbd108-38e5-44c9-a6a8-efaec064d3f0\") " Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.090738 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b46f1f12-a290-441c-a3bb-4584cc2a3102-config\") pod \"route-controller-manager-65dbd47846-qgvzb\" (UID: \"b46f1f12-a290-441c-a3bb-4584cc2a3102\") " pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.090799 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b46f1f12-a290-441c-a3bb-4584cc2a3102-serving-cert\") pod \"route-controller-manager-65dbd47846-qgvzb\" (UID: \"b46f1f12-a290-441c-a3bb-4584cc2a3102\") " pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.090876 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5jnw\" (UniqueName: \"kubernetes.io/projected/b46f1f12-a290-441c-a3bb-4584cc2a3102-kube-api-access-f5jnw\") pod \"route-controller-manager-65dbd47846-qgvzb\" (UID: \"b46f1f12-a290-441c-a3bb-4584cc2a3102\") " pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.090910 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b46f1f12-a290-441c-a3bb-4584cc2a3102-client-ca\") pod \"route-controller-manager-65dbd47846-qgvzb\" (UID: \"b46f1f12-a290-441c-a3bb-4584cc2a3102\") " pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.091576 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-client-ca" (OuterVolumeSpecName: "client-ca") pod "afb7e8b5-ea3c-41ae-89da-ab5ec7171600" (UID: "afb7e8b5-ea3c-41ae-89da-ab5ec7171600"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.091969 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7dbbd108-38e5-44c9-a6a8-efaec064d3f0" (UID: "7dbbd108-38e5-44c9-a6a8-efaec064d3f0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.092148 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-client-ca" (OuterVolumeSpecName: "client-ca") pod "7dbbd108-38e5-44c9-a6a8-efaec064d3f0" (UID: "7dbbd108-38e5-44c9-a6a8-efaec064d3f0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.092294 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b46f1f12-a290-441c-a3bb-4584cc2a3102-client-ca\") pod \"route-controller-manager-65dbd47846-qgvzb\" (UID: \"b46f1f12-a290-441c-a3bb-4584cc2a3102\") " pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.092346 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-config" (OuterVolumeSpecName: "config") pod "afb7e8b5-ea3c-41ae-89da-ab5ec7171600" (UID: "afb7e8b5-ea3c-41ae-89da-ab5ec7171600"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.092350 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-config" (OuterVolumeSpecName: "config") pod "7dbbd108-38e5-44c9-a6a8-efaec064d3f0" (UID: "7dbbd108-38e5-44c9-a6a8-efaec064d3f0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.092766 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b46f1f12-a290-441c-a3bb-4584cc2a3102-config\") pod \"route-controller-manager-65dbd47846-qgvzb\" (UID: \"b46f1f12-a290-441c-a3bb-4584cc2a3102\") " pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.096906 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-kube-api-access-7dqd9" (OuterVolumeSpecName: "kube-api-access-7dqd9") pod "afb7e8b5-ea3c-41ae-89da-ab5ec7171600" (UID: "afb7e8b5-ea3c-41ae-89da-ab5ec7171600"). InnerVolumeSpecName "kube-api-access-7dqd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.097005 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "afb7e8b5-ea3c-41ae-89da-ab5ec7171600" (UID: "afb7e8b5-ea3c-41ae-89da-ab5ec7171600"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.099904 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7dbbd108-38e5-44c9-a6a8-efaec064d3f0" (UID: "7dbbd108-38e5-44c9-a6a8-efaec064d3f0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.112139 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-kube-api-access-cw9rs" (OuterVolumeSpecName: "kube-api-access-cw9rs") pod "7dbbd108-38e5-44c9-a6a8-efaec064d3f0" (UID: "7dbbd108-38e5-44c9-a6a8-efaec064d3f0"). InnerVolumeSpecName "kube-api-access-cw9rs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.124574 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b46f1f12-a290-441c-a3bb-4584cc2a3102-serving-cert\") pod \"route-controller-manager-65dbd47846-qgvzb\" (UID: \"b46f1f12-a290-441c-a3bb-4584cc2a3102\") " pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.191905 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5jnw\" (UniqueName: \"kubernetes.io/projected/b46f1f12-a290-441c-a3bb-4584cc2a3102-kube-api-access-f5jnw\") pod \"route-controller-manager-65dbd47846-qgvzb\" (UID: \"b46f1f12-a290-441c-a3bb-4584cc2a3102\") " pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.192013 5008 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.192031 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cw9rs\" (UniqueName: \"kubernetes.io/projected/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-kube-api-access-cw9rs\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.192045 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.192059 5008 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.192070 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.192082 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dqd9\" (UniqueName: \"kubernetes.io/projected/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-kube-api-access-7dqd9\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.192093 5008 reconciler_common.go:293] "Volume detached 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.192102 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afb7e8b5-ea3c-41ae-89da-ab5ec7171600-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.192113 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dbbd108-38e5-44c9-a6a8-efaec064d3f0-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.215664 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5jnw\" (UniqueName: \"kubernetes.io/projected/b46f1f12-a290-441c-a3bb-4584cc2a3102-kube-api-access-f5jnw\") pod \"route-controller-manager-65dbd47846-qgvzb\" (UID: \"b46f1f12-a290-441c-a3bb-4584cc2a3102\") " pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.290738 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" event={"ID":"afb7e8b5-ea3c-41ae-89da-ab5ec7171600","Type":"ContainerDied","Data":"046a17c590098826d0a5eac7cd1935848d5c2b4be0940c2d3316db2e124ab690"} Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.290834 5008 scope.go:117] "RemoveContainer" containerID="3b02507460795f19821a392cda839dd09d546d1c9003a8fa34c584311783c49f" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.290855 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.293917 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" event={"ID":"7dbbd108-38e5-44c9-a6a8-efaec064d3f0","Type":"ContainerDied","Data":"c89ec15a06edbf1f2377f00b216c0feab1fc55200bf490f013cd333af9148873"} Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.294018 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d7649699d-6xx6r" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.313938 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.324546 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46"] Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.332807 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64b449df99-q9t46"] Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.337210 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d7649699d-6xx6r"] Jan 29 15:30:48 crc kubenswrapper[5008]: I0129 15:30:48.341238 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d7649699d-6xx6r"] Jan 29 15:30:49 crc kubenswrapper[5008]: I0129 15:30:49.134860 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-6wmrp" Jan 29 15:30:49 crc kubenswrapper[5008]: I0129 15:30:49.334328 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dbbd108-38e5-44c9-a6a8-efaec064d3f0" path="/var/lib/kubelet/pods/7dbbd108-38e5-44c9-a6a8-efaec064d3f0/volumes" Jan 29 15:30:49 crc kubenswrapper[5008]: I0129 15:30:49.335240 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afb7e8b5-ea3c-41ae-89da-ab5ec7171600" path="/var/lib/kubelet/pods/afb7e8b5-ea3c-41ae-89da-ab5ec7171600/volumes" Jan 29 15:30:49 crc kubenswrapper[5008]: I0129 15:30:49.805488 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:30:49 crc kubenswrapper[5008]: I0129 15:30:49.810419 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.021730 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt"] Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.025557 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.028164 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.028187 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.028396 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.028868 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.031371 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt"] Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.039628 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.039871 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.046939 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.126509 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-config\") pod \"controller-manager-58c6d6bbf4-dzqxt\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.126591 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-proxy-ca-bundles\") pod \"controller-manager-58c6d6bbf4-dzqxt\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.126632 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjbjx\" (UniqueName: \"kubernetes.io/projected/2f3f8688-c937-4724-83ec-494dcce5177d-kube-api-access-xjbjx\") pod \"controller-manager-58c6d6bbf4-dzqxt\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.126650 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f3f8688-c937-4724-83ec-494dcce5177d-serving-cert\") pod \"controller-manager-58c6d6bbf4-dzqxt\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.126683 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-client-ca\") pod \"controller-manager-58c6d6bbf4-dzqxt\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.229839 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-proxy-ca-bundles\") pod \"controller-manager-58c6d6bbf4-dzqxt\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.229910 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjbjx\" (UniqueName: \"kubernetes.io/projected/2f3f8688-c937-4724-83ec-494dcce5177d-kube-api-access-xjbjx\") pod \"controller-manager-58c6d6bbf4-dzqxt\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.229940 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f3f8688-c937-4724-83ec-494dcce5177d-serving-cert\") pod \"controller-manager-58c6d6bbf4-dzqxt\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.229978 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-client-ca\") pod \"controller-manager-58c6d6bbf4-dzqxt\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.230013 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-config\") pod \"controller-manager-58c6d6bbf4-dzqxt\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.231147 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-proxy-ca-bundles\") pod \"controller-manager-58c6d6bbf4-dzqxt\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.231441 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-config\") pod \"controller-manager-58c6d6bbf4-dzqxt\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.237298 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f3f8688-c937-4724-83ec-494dcce5177d-serving-cert\") pod \"controller-manager-58c6d6bbf4-dzqxt\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " 
pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.250232 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjbjx\" (UniqueName: \"kubernetes.io/projected/2f3f8688-c937-4724-83ec-494dcce5177d-kube-api-access-xjbjx\") pod \"controller-manager-58c6d6bbf4-dzqxt\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.258954 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-client-ca\") pod \"controller-manager-58c6d6bbf4-dzqxt\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:50 crc kubenswrapper[5008]: I0129 15:30:50.347921 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:30:52 crc kubenswrapper[5008]: I0129 15:30:52.309614 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 15:30:52 crc kubenswrapper[5008]: I0129 15:30:52.311435 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:30:52 crc kubenswrapper[5008]: I0129 15:30:52.315648 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 15:30:52 crc kubenswrapper[5008]: I0129 15:30:52.316150 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 15:30:52 crc kubenswrapper[5008]: I0129 15:30:52.319913 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 15:30:52 crc kubenswrapper[5008]: I0129 15:30:52.457220 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70797d1b-2554-4595-aaed-29539196bbd1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"70797d1b-2554-4595-aaed-29539196bbd1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:30:52 crc kubenswrapper[5008]: I0129 15:30:52.457305 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70797d1b-2554-4595-aaed-29539196bbd1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"70797d1b-2554-4595-aaed-29539196bbd1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:30:52 crc kubenswrapper[5008]: I0129 15:30:52.558395 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70797d1b-2554-4595-aaed-29539196bbd1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"70797d1b-2554-4595-aaed-29539196bbd1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:30:52 crc kubenswrapper[5008]: I0129 15:30:52.558492 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70797d1b-2554-4595-aaed-29539196bbd1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"70797d1b-2554-4595-aaed-29539196bbd1\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:30:52 crc kubenswrapper[5008]: I0129 15:30:52.558613 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70797d1b-2554-4595-aaed-29539196bbd1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"70797d1b-2554-4595-aaed-29539196bbd1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:30:52 crc kubenswrapper[5008]: I0129 15:30:52.585276 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70797d1b-2554-4595-aaed-29539196bbd1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"70797d1b-2554-4595-aaed-29539196bbd1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:30:52 crc kubenswrapper[5008]: I0129 15:30:52.635825 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:30:56 crc kubenswrapper[5008]: I0129 15:30:56.830013 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt"] Jan 29 15:30:56 crc kubenswrapper[5008]: I0129 15:30:56.926244 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb"] Jan 29 15:30:57 crc kubenswrapper[5008]: I0129 15:30:57.526645 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 15:30:57 crc kubenswrapper[5008]: I0129 15:30:57.527914 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:30:57 crc kubenswrapper[5008]: I0129 15:30:57.530499 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 15:30:57 crc kubenswrapper[5008]: I0129 15:30:57.726759 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-kubelet-dir\") pod \"installer-9-crc\" (UID: \"af4b11bc-2d2f-4e68-ab59-cbc08fecba52\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:30:57 crc kubenswrapper[5008]: I0129 15:30:57.726975 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-kube-api-access\") pod \"installer-9-crc\" (UID: \"af4b11bc-2d2f-4e68-ab59-cbc08fecba52\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:30:57 crc kubenswrapper[5008]: I0129 15:30:57.727095 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-var-lock\") pod \"installer-9-crc\" (UID: \"af4b11bc-2d2f-4e68-ab59-cbc08fecba52\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:30:57 crc kubenswrapper[5008]: I0129 15:30:57.828142 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-var-lock\") pod \"installer-9-crc\" (UID: \"af4b11bc-2d2f-4e68-ab59-cbc08fecba52\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:30:57 crc kubenswrapper[5008]: I0129 15:30:57.828208 5008 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-kubelet-dir\") pod \"installer-9-crc\" (UID: \"af4b11bc-2d2f-4e68-ab59-cbc08fecba52\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:30:57 crc kubenswrapper[5008]: I0129 15:30:57.828264 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-kube-api-access\") pod \"installer-9-crc\" (UID: \"af4b11bc-2d2f-4e68-ab59-cbc08fecba52\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:30:57 crc kubenswrapper[5008]: I0129 15:30:57.828294 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-var-lock\") pod \"installer-9-crc\" (UID: \"af4b11bc-2d2f-4e68-ab59-cbc08fecba52\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:30:57 crc kubenswrapper[5008]: I0129 15:30:57.828360 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-kubelet-dir\") pod \"installer-9-crc\" (UID: \"af4b11bc-2d2f-4e68-ab59-cbc08fecba52\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:30:57 crc kubenswrapper[5008]: I0129 15:30:57.856361 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-kube-api-access\") pod \"installer-9-crc\" (UID: \"af4b11bc-2d2f-4e68-ab59-cbc08fecba52\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:30:58 crc kubenswrapper[5008]: I0129 15:30:58.155621 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:31:13 crc kubenswrapper[5008]: I0129 15:31:13.990367 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:31:13 crc kubenswrapper[5008]: I0129 15:31:13.991159 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:31:13 crc kubenswrapper[5008]: I0129 15:31:13.991225 5008 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:31:13 crc kubenswrapper[5008]: I0129 15:31:13.992167 5008 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731"} pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:31:13 crc kubenswrapper[5008]: I0129 15:31:13.992285 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" containerID="cri-o://b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731" gracePeriod=600 Jan 29 15:31:19 crc kubenswrapper[5008]: I0129 15:31:19.477998 5008 generic.go:334] "Generic (PLEG): container finished" podID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerID="b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731" exitCode=0 Jan 29 15:31:19 crc kubenswrapper[5008]: I0129 15:31:19.478138 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerDied","Data":"b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731"} Jan 29 15:31:19 crc kubenswrapper[5008]: I0129 15:31:19.839248 5008 scope.go:117] "RemoveContainer" containerID="a89ad9ebedb6a41ee71edf80b0a6e1658e17f7834cb3f34aa4f8d7ca83f8b7f5" Jan 29 15:31:27 crc kubenswrapper[5008]: E0129 15:31:27.522717 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:31:27 crc kubenswrapper[5008]: E0129 15:31:27.523321 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s8q2q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-4dwdf_openshift-marketplace(d2d42845-cca1-4b60-bc84-4b2baebf702b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:31:27 crc kubenswrapper[5008]: E0129 15:31:27.524480 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-4dwdf" podUID="d2d42845-cca1-4b60-bc84-4b2baebf702b" Jan 29 15:31:28 crc kubenswrapper[5008]: E0129 15:31:28.058774 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:31:28 crc kubenswrapper[5008]: E0129 15:31:28.059056 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-btkm4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-h7vmc_openshift-marketplace(9bcecb83-1aec-4bd4-9b46-f02deb628018): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:31:28 crc kubenswrapper[5008]: E0129 15:31:28.060337 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-h7vmc" podUID="9bcecb83-1aec-4bd4-9b46-f02deb628018" Jan 29 15:31:36 crc kubenswrapper[5008]: E0129 15:31:36.319087 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:31:36 crc kubenswrapper[5008]: E0129 15:31:36.320023 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6pfbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-lhtht_openshift-marketplace(a954daed-802a-4b46-81ef-7079dcddbaa5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:31:36 crc kubenswrapper[5008]: E0129 15:31:36.321276 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-lhtht" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" Jan 29 15:31:36 crc kubenswrapper[5008]: E0129 15:31:36.323980 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:31:36 crc kubenswrapper[5008]: E0129 15:31:36.324081 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-229kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-tst9c_openshift-marketplace(ea8deba9-72cb-4274-add1-e80591a9e7cc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:31:36 crc kubenswrapper[5008]: E0129 15:31:36.325361 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-tst9c" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" Jan 29 15:31:39 crc kubenswrapper[5008]: E0129 15:31:39.641377 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4dwdf" podUID="d2d42845-cca1-4b60-bc84-4b2baebf702b" Jan 29 15:31:39 crc kubenswrapper[5008]: E0129 15:31:39.661494 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:31:39 crc kubenswrapper[5008]: E0129 15:31:39.661704 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dldqp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-cwgw5_openshift-marketplace(6aebe040-289b-48c1-a825-f12b471a5ad6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:31:39 crc kubenswrapper[5008]: E0129 15:31:39.663081 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-cwgw5" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" Jan 29 15:31:39 crc kubenswrapper[5008]: E0129 15:31:39.697688 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:31:39 crc kubenswrapper[5008]: E0129 15:31:39.698026 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5sl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-z9t2h_openshift-marketplace(250e7db8-88dd-44fd-8d73-51a6f8f4ba96): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:31:39 crc kubenswrapper[5008]: E0129 15:31:39.699214 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-z9t2h" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" Jan 29 15:31:40 crc kubenswrapper[5008]: E0129 15:31:40.900082 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-h7vmc" podUID="9bcecb83-1aec-4bd4-9b46-f02deb628018" Jan 29 15:31:40 crc kubenswrapper[5008]: E0129 15:31:40.907946 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:31:40 crc kubenswrapper[5008]: E0129 15:31:40.908071 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ftbd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-mkxw5_openshift-marketplace(6aef1830-577d-405c-bb54-6f9fe217ae86): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:31:40 crc kubenswrapper[5008]: E0129 15:31:40.909262 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-mkxw5" podUID="6aef1830-577d-405c-bb54-6f9fe217ae86" Jan 29 15:31:40 crc kubenswrapper[5008]: E0129 15:31:40.917762 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:31:40 crc kubenswrapper[5008]: E0129 15:31:40.917908 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lw6k4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fd6nq_openshift-marketplace(37742fc9-fce4-41f0-ba04-7232b6e647a7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:31:40 crc kubenswrapper[5008]: E0129 15:31:40.919095 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-fd6nq" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.341669 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt"] Jan 29 15:31:41 crc kubenswrapper[5008]: W0129 15:31:41.348431 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f3f8688_c937_4724_83ec_494dcce5177d.slice/crio-8377a8ad0934799e196e9abb9f60b501a8ac0a2ca3e736013d5254ba54abd663 WatchSource:0}: Error finding container 8377a8ad0934799e196e9abb9f60b501a8ac0a2ca3e736013d5254ba54abd663: Status 404 returned error can't find the container with id 8377a8ad0934799e196e9abb9f60b501a8ac0a2ca3e736013d5254ba54abd663 Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.393514 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.398317 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb"] Jan 29 15:31:41 crc kubenswrapper[5008]: W0129 15:31:41.402696 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podaf4b11bc_2d2f_4e68_ab59_cbc08fecba52.slice/crio-3fbb18559f4006c21dcfe445af54451f7c34b27ece772e485463a9d59d5f3753 WatchSource:0}: Error finding container 3fbb18559f4006c21dcfe445af54451f7c34b27ece772e485463a9d59d5f3753: Status 404 returned error can't find the container with id 
3fbb18559f4006c21dcfe445af54451f7c34b27ece772e485463a9d59d5f3753 Jan 29 15:31:41 crc kubenswrapper[5008]: W0129 15:31:41.406316 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb46f1f12_a290_441c_a3bb_4584cc2a3102.slice/crio-9721da6a6b936d29937431fe10eb863eff5114e4271b46c477b1083f5c955934 WatchSource:0}: Error finding container 9721da6a6b936d29937431fe10eb863eff5114e4271b46c477b1083f5c955934: Status 404 returned error can't find the container with id 9721da6a6b936d29937431fe10eb863eff5114e4271b46c477b1083f5c955934 Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.410753 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 15:31:41 crc kubenswrapper[5008]: W0129 15:31:41.431097 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod70797d1b_2554_4595_aaed_29539196bbd1.slice/crio-c53bfe723487e14c18ecdbc12136eea34bb11109ee7d7e5f7b0bdf07b8cfad3e WatchSource:0}: Error finding container c53bfe723487e14c18ecdbc12136eea34bb11109ee7d7e5f7b0bdf07b8cfad3e: Status 404 returned error can't find the container with id c53bfe723487e14c18ecdbc12136eea34bb11109ee7d7e5f7b0bdf07b8cfad3e Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.627219 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" event={"ID":"b46f1f12-a290-441c-a3bb-4584cc2a3102","Type":"ContainerStarted","Data":"23df9f5e487c90cd3a8c5694679972c5e894b1b84afd6fb8e62b3b3d43f428ad"} Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.627262 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" event={"ID":"b46f1f12-a290-441c-a3bb-4584cc2a3102","Type":"ContainerStarted","Data":"9721da6a6b936d29937431fe10eb863eff5114e4271b46c477b1083f5c955934"} Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.627355 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" podUID="b46f1f12-a290-441c-a3bb-4584cc2a3102" containerName="route-controller-manager" containerID="cri-o://23df9f5e487c90cd3a8c5694679972c5e894b1b84afd6fb8e62b3b3d43f428ad" gracePeriod=30 Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.627467 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.631731 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerStarted","Data":"1094d3e48c81c3e2ea9f57f39bbd7ccc01c1ccc72a4337e691b80548a8d40521"} Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.632946 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"af4b11bc-2d2f-4e68-ab59-cbc08fecba52","Type":"ContainerStarted","Data":"3fbb18559f4006c21dcfe445af54451f7c34b27ece772e485463a9d59d5f3753"} Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.634848 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"70797d1b-2554-4595-aaed-29539196bbd1","Type":"ContainerStarted","Data":"c53bfe723487e14c18ecdbc12136eea34bb11109ee7d7e5f7b0bdf07b8cfad3e"} Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.635996 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" event={"ID":"2f3f8688-c937-4724-83ec-494dcce5177d","Type":"ContainerStarted","Data":"8a153c4c7ac3c8a86b287a80213273efffbc6db000eff9bef3905617af6a5a26"} Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.636030 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" event={"ID":"2f3f8688-c937-4724-83ec-494dcce5177d","Type":"ContainerStarted","Data":"8377a8ad0934799e196e9abb9f60b501a8ac0a2ca3e736013d5254ba54abd663"} Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.636112 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" podUID="2f3f8688-c937-4724-83ec-494dcce5177d" containerName="controller-manager" containerID="cri-o://8a153c4c7ac3c8a86b287a80213273efffbc6db000eff9bef3905617af6a5a26" gracePeriod=30 Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.636264 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.640744 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.644073 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" podStartSLOduration=65.644058141 podStartE2EDuration="1m5.644058141s" podCreationTimestamp="2026-01-29 15:30:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:31:41.641588679 +0000 UTC m=+245.314442916" watchObservedRunningTime="2026-01-29 15:31:41.644058141 +0000 UTC m=+245.316912378" Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.685234 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" podStartSLOduration=65.68521153 podStartE2EDuration="1m5.68521153s" podCreationTimestamp="2026-01-29 15:30:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:31:41.666119821 +0000 UTC m=+245.338974058" watchObservedRunningTime="2026-01-29 15:31:41.68521153 +0000 UTC m=+245.358065777" Jan 29 15:31:41 crc kubenswrapper[5008]: I0129 15:31:41.992285 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.016014 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-585448bccb-4m9fq"] Jan 29 15:31:42 crc kubenswrapper[5008]: E0129 15:31:42.016230 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f3f8688-c937-4724-83ec-494dcce5177d" containerName="controller-manager" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.016242 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f3f8688-c937-4724-83ec-494dcce5177d" containerName="controller-manager" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.016345 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f3f8688-c937-4724-83ec-494dcce5177d" containerName="controller-manager" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.016662 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.034598 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-585448bccb-4m9fq"] Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.109632 5008 patch_prober.go:28] interesting pod/route-controller-manager-65dbd47846-qgvzb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:36014->10.217.0.58:8443: read: connection reset by peer" start-of-body= Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.109701 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" podUID="b46f1f12-a290-441c-a3bb-4584cc2a3102" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:36014->10.217.0.58:8443: read: connection reset by peer" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.113192 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-config\") pod \"2f3f8688-c937-4724-83ec-494dcce5177d\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.113289 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-proxy-ca-bundles\") pod \"2f3f8688-c937-4724-83ec-494dcce5177d\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.113330 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjbjx\" (UniqueName: \"kubernetes.io/projected/2f3f8688-c937-4724-83ec-494dcce5177d-kube-api-access-xjbjx\") pod \"2f3f8688-c937-4724-83ec-494dcce5177d\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.113358 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f3f8688-c937-4724-83ec-494dcce5177d-serving-cert\") pod \"2f3f8688-c937-4724-83ec-494dcce5177d\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " Jan 29 
15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.113442 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-client-ca\") pod \"2f3f8688-c937-4724-83ec-494dcce5177d\" (UID: \"2f3f8688-c937-4724-83ec-494dcce5177d\") " Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.114074 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2f3f8688-c937-4724-83ec-494dcce5177d" (UID: "2f3f8688-c937-4724-83ec-494dcce5177d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.114096 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-client-ca" (OuterVolumeSpecName: "client-ca") pod "2f3f8688-c937-4724-83ec-494dcce5177d" (UID: "2f3f8688-c937-4724-83ec-494dcce5177d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.114149 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-config" (OuterVolumeSpecName: "config") pod "2f3f8688-c937-4724-83ec-494dcce5177d" (UID: "2f3f8688-c937-4724-83ec-494dcce5177d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.118945 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f3f8688-c937-4724-83ec-494dcce5177d-kube-api-access-xjbjx" (OuterVolumeSpecName: "kube-api-access-xjbjx") pod "2f3f8688-c937-4724-83ec-494dcce5177d" (UID: "2f3f8688-c937-4724-83ec-494dcce5177d"). InnerVolumeSpecName "kube-api-access-xjbjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.119919 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f3f8688-c937-4724-83ec-494dcce5177d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2f3f8688-c937-4724-83ec-494dcce5177d" (UID: "2f3f8688-c937-4724-83ec-494dcce5177d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.215007 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-client-ca\") pod \"controller-manager-585448bccb-4m9fq\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.215172 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-config\") pod \"controller-manager-585448bccb-4m9fq\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.215232 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st2l6\" (UniqueName: \"kubernetes.io/projected/64612440-e59b-46bb-a60f-f10989166e58-kube-api-access-st2l6\") pod \"controller-manager-585448bccb-4m9fq\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.215306 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-proxy-ca-bundles\") pod \"controller-manager-585448bccb-4m9fq\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.215362 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64612440-e59b-46bb-a60f-f10989166e58-serving-cert\") pod \"controller-manager-585448bccb-4m9fq\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.215468 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjbjx\" (UniqueName: \"kubernetes.io/projected/2f3f8688-c937-4724-83ec-494dcce5177d-kube-api-access-xjbjx\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.215495 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f3f8688-c937-4724-83ec-494dcce5177d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.215509 5008 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.215520 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.215532 5008 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f3f8688-c937-4724-83ec-494dcce5177d-proxy-ca-bundles\") on node \"crc\" 
DevicePath \"\"" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.317280 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-proxy-ca-bundles\") pod \"controller-manager-585448bccb-4m9fq\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.317337 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64612440-e59b-46bb-a60f-f10989166e58-serving-cert\") pod \"controller-manager-585448bccb-4m9fq\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.317421 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-client-ca\") pod \"controller-manager-585448bccb-4m9fq\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.317455 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-config\") pod \"controller-manager-585448bccb-4m9fq\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.317474 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st2l6\" (UniqueName: \"kubernetes.io/projected/64612440-e59b-46bb-a60f-f10989166e58-kube-api-access-st2l6\") pod \"controller-manager-585448bccb-4m9fq\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.319162 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-client-ca\") pod \"controller-manager-585448bccb-4m9fq\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.319418 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-proxy-ca-bundles\") pod \"controller-manager-585448bccb-4m9fq\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.319684 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-config\") pod \"controller-manager-585448bccb-4m9fq\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.322430 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/64612440-e59b-46bb-a60f-f10989166e58-serving-cert\") pod \"controller-manager-585448bccb-4m9fq\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.338323 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st2l6\" (UniqueName: \"kubernetes.io/projected/64612440-e59b-46bb-a60f-f10989166e58-kube-api-access-st2l6\") pod \"controller-manager-585448bccb-4m9fq\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.343193 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-65dbd47846-qgvzb_b46f1f12-a290-441c-a3bb-4584cc2a3102/route-controller-manager/0.log" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.343249 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.519467 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5jnw\" (UniqueName: \"kubernetes.io/projected/b46f1f12-a290-441c-a3bb-4584cc2a3102-kube-api-access-f5jnw\") pod \"b46f1f12-a290-441c-a3bb-4584cc2a3102\" (UID: \"b46f1f12-a290-441c-a3bb-4584cc2a3102\") " Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.519536 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b46f1f12-a290-441c-a3bb-4584cc2a3102-config\") pod \"b46f1f12-a290-441c-a3bb-4584cc2a3102\" (UID: \"b46f1f12-a290-441c-a3bb-4584cc2a3102\") " Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.519647 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b46f1f12-a290-441c-a3bb-4584cc2a3102-serving-cert\") pod \"b46f1f12-a290-441c-a3bb-4584cc2a3102\" (UID: \"b46f1f12-a290-441c-a3bb-4584cc2a3102\") " Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.519684 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b46f1f12-a290-441c-a3bb-4584cc2a3102-client-ca\") pod \"b46f1f12-a290-441c-a3bb-4584cc2a3102\" (UID: \"b46f1f12-a290-441c-a3bb-4584cc2a3102\") " Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.520548 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b46f1f12-a290-441c-a3bb-4584cc2a3102-config" (OuterVolumeSpecName: "config") pod "b46f1f12-a290-441c-a3bb-4584cc2a3102" (UID: "b46f1f12-a290-441c-a3bb-4584cc2a3102"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.520750 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b46f1f12-a290-441c-a3bb-4584cc2a3102-client-ca" (OuterVolumeSpecName: "client-ca") pod "b46f1f12-a290-441c-a3bb-4584cc2a3102" (UID: "b46f1f12-a290-441c-a3bb-4584cc2a3102"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.524406 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b46f1f12-a290-441c-a3bb-4584cc2a3102-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b46f1f12-a290-441c-a3bb-4584cc2a3102" (UID: "b46f1f12-a290-441c-a3bb-4584cc2a3102"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.524662 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b46f1f12-a290-441c-a3bb-4584cc2a3102-kube-api-access-f5jnw" (OuterVolumeSpecName: "kube-api-access-f5jnw") pod "b46f1f12-a290-441c-a3bb-4584cc2a3102" (UID: "b46f1f12-a290-441c-a3bb-4584cc2a3102"). InnerVolumeSpecName "kube-api-access-f5jnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.621076 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b46f1f12-a290-441c-a3bb-4584cc2a3102-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.621125 5008 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b46f1f12-a290-441c-a3bb-4584cc2a3102-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.621139 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5jnw\" (UniqueName: \"kubernetes.io/projected/b46f1f12-a290-441c-a3bb-4584cc2a3102-kube-api-access-f5jnw\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.621152 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b46f1f12-a290-441c-a3bb-4584cc2a3102-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.637443 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.643225 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-65dbd47846-qgvzb_b46f1f12-a290-441c-a3bb-4584cc2a3102/route-controller-manager/0.log" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.643273 5008 generic.go:334] "Generic (PLEG): container finished" podID="b46f1f12-a290-441c-a3bb-4584cc2a3102" containerID="23df9f5e487c90cd3a8c5694679972c5e894b1b84afd6fb8e62b3b3d43f428ad" exitCode=255 Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.643329 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" event={"ID":"b46f1f12-a290-441c-a3bb-4584cc2a3102","Type":"ContainerDied","Data":"23df9f5e487c90cd3a8c5694679972c5e894b1b84afd6fb8e62b3b3d43f428ad"} Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.643341 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.643354 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb" event={"ID":"b46f1f12-a290-441c-a3bb-4584cc2a3102","Type":"ContainerDied","Data":"9721da6a6b936d29937431fe10eb863eff5114e4271b46c477b1083f5c955934"} Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.643370 5008 scope.go:117] "RemoveContainer" containerID="23df9f5e487c90cd3a8c5694679972c5e894b1b84afd6fb8e62b3b3d43f428ad" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.645294 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"af4b11bc-2d2f-4e68-ab59-cbc08fecba52","Type":"ContainerStarted","Data":"7c398ab151812dfd065f5ce688e5a1aab9c54766a8265004ad57f01a071e1896"} Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.648275 5008 generic.go:334] "Generic (PLEG): container finished" podID="70797d1b-2554-4595-aaed-29539196bbd1" containerID="676b828ba9c9e8717218ec7b830b98e6f483d763f70089b6eac56428e7248a03" exitCode=0 Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.648448 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"70797d1b-2554-4595-aaed-29539196bbd1","Type":"ContainerDied","Data":"676b828ba9c9e8717218ec7b830b98e6f483d763f70089b6eac56428e7248a03"} Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.650185 5008 generic.go:334] "Generic (PLEG): container finished" podID="2f3f8688-c937-4724-83ec-494dcce5177d" containerID="8a153c4c7ac3c8a86b287a80213273efffbc6db000eff9bef3905617af6a5a26" exitCode=0 Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.650296 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" event={"ID":"2f3f8688-c937-4724-83ec-494dcce5177d","Type":"ContainerDied","Data":"8a153c4c7ac3c8a86b287a80213273efffbc6db000eff9bef3905617af6a5a26"} Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.650324 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" event={"ID":"2f3f8688-c937-4724-83ec-494dcce5177d","Type":"ContainerDied","Data":"8377a8ad0934799e196e9abb9f60b501a8ac0a2ca3e736013d5254ba54abd663"} Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.650516 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.658670 5008 scope.go:117] "RemoveContainer" containerID="23df9f5e487c90cd3a8c5694679972c5e894b1b84afd6fb8e62b3b3d43f428ad" Jan 29 15:31:42 crc kubenswrapper[5008]: E0129 15:31:42.659156 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23df9f5e487c90cd3a8c5694679972c5e894b1b84afd6fb8e62b3b3d43f428ad\": container with ID starting with 23df9f5e487c90cd3a8c5694679972c5e894b1b84afd6fb8e62b3b3d43f428ad not found: ID does not exist" containerID="23df9f5e487c90cd3a8c5694679972c5e894b1b84afd6fb8e62b3b3d43f428ad" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.659195 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23df9f5e487c90cd3a8c5694679972c5e894b1b84afd6fb8e62b3b3d43f428ad"} err="failed to get container status \"23df9f5e487c90cd3a8c5694679972c5e894b1b84afd6fb8e62b3b3d43f428ad\": rpc error: code = NotFound desc = could not find container \"23df9f5e487c90cd3a8c5694679972c5e894b1b84afd6fb8e62b3b3d43f428ad\": container with ID starting with 23df9f5e487c90cd3a8c5694679972c5e894b1b84afd6fb8e62b3b3d43f428ad not found: ID does not exist" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.659225 5008 scope.go:117] "RemoveContainer" containerID="8a153c4c7ac3c8a86b287a80213273efffbc6db000eff9bef3905617af6a5a26" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.663980 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=45.66396284 podStartE2EDuration="45.66396284s" podCreationTimestamp="2026-01-29 15:30:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:31:42.66300885 +0000 UTC m=+246.335863107" watchObservedRunningTime="2026-01-29 15:31:42.66396284 +0000 UTC m=+246.336817077" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.679644 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb"] Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.679743 5008 scope.go:117] "RemoveContainer" containerID="8a153c4c7ac3c8a86b287a80213273efffbc6db000eff9bef3905617af6a5a26" Jan 29 15:31:42 crc kubenswrapper[5008]: E0129 15:31:42.680154 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a153c4c7ac3c8a86b287a80213273efffbc6db000eff9bef3905617af6a5a26\": container with ID starting with 8a153c4c7ac3c8a86b287a80213273efffbc6db000eff9bef3905617af6a5a26 not found: ID does not exist" containerID="8a153c4c7ac3c8a86b287a80213273efffbc6db000eff9bef3905617af6a5a26" Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.680192 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a153c4c7ac3c8a86b287a80213273efffbc6db000eff9bef3905617af6a5a26"} err="failed to get container status \"8a153c4c7ac3c8a86b287a80213273efffbc6db000eff9bef3905617af6a5a26\": rpc error: code = NotFound desc = could not find container \"8a153c4c7ac3c8a86b287a80213273efffbc6db000eff9bef3905617af6a5a26\": container with ID starting with 8a153c4c7ac3c8a86b287a80213273efffbc6db000eff9bef3905617af6a5a26 not found: ID does not exist" Jan 29 15:31:42 crc 
kubenswrapper[5008]: I0129 15:31:42.682439 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65dbd47846-qgvzb"] Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.706975 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt"] Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.709305 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-58c6d6bbf4-dzqxt"] Jan 29 15:31:42 crc kubenswrapper[5008]: I0129 15:31:42.838480 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-585448bccb-4m9fq"] Jan 29 15:31:43 crc kubenswrapper[5008]: I0129 15:31:43.330459 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f3f8688-c937-4724-83ec-494dcce5177d" path="/var/lib/kubelet/pods/2f3f8688-c937-4724-83ec-494dcce5177d/volumes" Jan 29 15:31:43 crc kubenswrapper[5008]: I0129 15:31:43.331171 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b46f1f12-a290-441c-a3bb-4584cc2a3102" path="/var/lib/kubelet/pods/b46f1f12-a290-441c-a3bb-4584cc2a3102/volumes" Jan 29 15:31:43 crc kubenswrapper[5008]: I0129 15:31:43.657458 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" event={"ID":"64612440-e59b-46bb-a60f-f10989166e58","Type":"ContainerStarted","Data":"40321afd189e235fc1bb78923d74cb98e8fe85b88b55f9bd3844976bd07eb0f5"} Jan 29 15:31:43 crc kubenswrapper[5008]: I0129 15:31:43.658624 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" event={"ID":"64612440-e59b-46bb-a60f-f10989166e58","Type":"ContainerStarted","Data":"cbb9854cfe9f99d27e1796a8bf85e10b2281996e9b1dad79a2b1e102f79ba6c3"} Jan 29 15:31:43 crc kubenswrapper[5008]: I0129 15:31:43.680697 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" podStartSLOduration=47.680674894 podStartE2EDuration="47.680674894s" podCreationTimestamp="2026-01-29 15:30:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:31:43.680131023 +0000 UTC m=+247.352985310" watchObservedRunningTime="2026-01-29 15:31:43.680674894 +0000 UTC m=+247.353529161" Jan 29 15:31:43 crc kubenswrapper[5008]: I0129 15:31:43.874544 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.036589 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70797d1b-2554-4595-aaed-29539196bbd1-kubelet-dir\") pod \"70797d1b-2554-4595-aaed-29539196bbd1\" (UID: \"70797d1b-2554-4595-aaed-29539196bbd1\") " Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.036645 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70797d1b-2554-4595-aaed-29539196bbd1-kube-api-access\") pod \"70797d1b-2554-4595-aaed-29539196bbd1\" (UID: \"70797d1b-2554-4595-aaed-29539196bbd1\") " Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.036767 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70797d1b-2554-4595-aaed-29539196bbd1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "70797d1b-2554-4595-aaed-29539196bbd1" (UID: "70797d1b-2554-4595-aaed-29539196bbd1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.037330 5008 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70797d1b-2554-4595-aaed-29539196bbd1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.051345 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4"] Jan 29 15:31:44 crc kubenswrapper[5008]: E0129 15:31:44.051560 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b46f1f12-a290-441c-a3bb-4584cc2a3102" containerName="route-controller-manager" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.051573 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="b46f1f12-a290-441c-a3bb-4584cc2a3102" containerName="route-controller-manager" Jan 29 15:31:44 crc kubenswrapper[5008]: E0129 15:31:44.051586 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70797d1b-2554-4595-aaed-29539196bbd1" containerName="pruner" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.051592 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="70797d1b-2554-4595-aaed-29539196bbd1" containerName="pruner" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.051697 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="b46f1f12-a290-441c-a3bb-4584cc2a3102" containerName="route-controller-manager" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.051713 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="70797d1b-2554-4595-aaed-29539196bbd1" containerName="pruner" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.052339 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.052657 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70797d1b-2554-4595-aaed-29539196bbd1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "70797d1b-2554-4595-aaed-29539196bbd1" (UID: "70797d1b-2554-4595-aaed-29539196bbd1"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.054195 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.054707 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.055555 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.055564 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.055977 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.055724 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.068660 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4"] Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.138690 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70797d1b-2554-4595-aaed-29539196bbd1-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.239753 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw5nn\" (UniqueName: \"kubernetes.io/projected/bf35ff68-68b3-4743-803f-e451a5f5c5bd-kube-api-access-mw5nn\") pod \"route-controller-manager-556b59fcb8-5lkx4\" (UID: \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\") " pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.240207 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf35ff68-68b3-4743-803f-e451a5f5c5bd-serving-cert\") pod \"route-controller-manager-556b59fcb8-5lkx4\" (UID: \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\") " pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.240247 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf35ff68-68b3-4743-803f-e451a5f5c5bd-client-ca\") pod \"route-controller-manager-556b59fcb8-5lkx4\" (UID: \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\") " pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.240321 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf35ff68-68b3-4743-803f-e451a5f5c5bd-config\") pod \"route-controller-manager-556b59fcb8-5lkx4\" (UID: \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\") " pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.342130 5008 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw5nn\" (UniqueName: \"kubernetes.io/projected/bf35ff68-68b3-4743-803f-e451a5f5c5bd-kube-api-access-mw5nn\") pod \"route-controller-manager-556b59fcb8-5lkx4\" (UID: \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\") " pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.342248 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf35ff68-68b3-4743-803f-e451a5f5c5bd-serving-cert\") pod \"route-controller-manager-556b59fcb8-5lkx4\" (UID: \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\") " pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.342303 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf35ff68-68b3-4743-803f-e451a5f5c5bd-client-ca\") pod \"route-controller-manager-556b59fcb8-5lkx4\" (UID: \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\") " pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.342424 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf35ff68-68b3-4743-803f-e451a5f5c5bd-config\") pod \"route-controller-manager-556b59fcb8-5lkx4\" (UID: \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\") " pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.345013 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf35ff68-68b3-4743-803f-e451a5f5c5bd-client-ca\") pod \"route-controller-manager-556b59fcb8-5lkx4\" (UID: \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\") " pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.345502 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf35ff68-68b3-4743-803f-e451a5f5c5bd-config\") pod \"route-controller-manager-556b59fcb8-5lkx4\" (UID: \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\") " pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.354918 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf35ff68-68b3-4743-803f-e451a5f5c5bd-serving-cert\") pod \"route-controller-manager-556b59fcb8-5lkx4\" (UID: \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\") " pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.364684 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw5nn\" (UniqueName: \"kubernetes.io/projected/bf35ff68-68b3-4743-803f-e451a5f5c5bd-kube-api-access-mw5nn\") pod \"route-controller-manager-556b59fcb8-5lkx4\" (UID: \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\") " pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.388459 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.665207 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"70797d1b-2554-4595-aaed-29539196bbd1","Type":"ContainerDied","Data":"c53bfe723487e14c18ecdbc12136eea34bb11109ee7d7e5f7b0bdf07b8cfad3e"} Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.665258 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c53bfe723487e14c18ecdbc12136eea34bb11109ee7d7e5f7b0bdf07b8cfad3e" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.665226 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.665420 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.672867 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:31:44 crc kubenswrapper[5008]: I0129 15:31:44.835969 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4"] Jan 29 15:31:45 crc kubenswrapper[5008]: I0129 15:31:45.671461 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" event={"ID":"bf35ff68-68b3-4743-803f-e451a5f5c5bd","Type":"ContainerStarted","Data":"dbb82c43ba7943df2747aa78a2127da4c2cba3ad40144842a2f920c5e71f8479"} Jan 29 15:31:45 crc kubenswrapper[5008]: I0129 15:31:45.671747 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" event={"ID":"bf35ff68-68b3-4743-803f-e451a5f5c5bd","Type":"ContainerStarted","Data":"151a001a83e99402752792ff1d9b03e857965ca404f04dce980c55396aacc517"} Jan 29 15:31:45 crc kubenswrapper[5008]: I0129 15:31:45.672036 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:31:45 crc kubenswrapper[5008]: I0129 15:31:45.677448 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:31:45 crc kubenswrapper[5008]: I0129 15:31:45.691063 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" podStartSLOduration=49.69104716 podStartE2EDuration="49.69104716s" podCreationTimestamp="2026-01-29 15:30:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:31:45.685897102 +0000 UTC m=+249.358751339" watchObservedRunningTime="2026-01-29 15:31:45.69104716 +0000 UTC m=+249.363901397" Jan 29 15:31:48 crc kubenswrapper[5008]: E0129 15:31:48.326393 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-lhtht" 
podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" Jan 29 15:31:50 crc kubenswrapper[5008]: E0129 15:31:50.326258 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-tst9c" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" Jan 29 15:31:52 crc kubenswrapper[5008]: I0129 15:31:52.726589 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7vmc" event={"ID":"9bcecb83-1aec-4bd4-9b46-f02deb628018","Type":"ContainerStarted","Data":"b223097e454aee435bbd77657fe9958c2c3189f1cb2e7d87694fd6419e82df76"} Jan 29 15:31:53 crc kubenswrapper[5008]: I0129 15:31:53.746194 5008 generic.go:334] "Generic (PLEG): container finished" podID="9bcecb83-1aec-4bd4-9b46-f02deb628018" containerID="b223097e454aee435bbd77657fe9958c2c3189f1cb2e7d87694fd6419e82df76" exitCode=0 Jan 29 15:31:53 crc kubenswrapper[5008]: I0129 15:31:53.746264 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7vmc" event={"ID":"9bcecb83-1aec-4bd4-9b46-f02deb628018","Type":"ContainerDied","Data":"b223097e454aee435bbd77657fe9958c2c3189f1cb2e7d87694fd6419e82df76"} Jan 29 15:31:54 crc kubenswrapper[5008]: E0129 15:31:54.325559 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z9t2h" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" Jan 29 15:31:54 crc kubenswrapper[5008]: E0129 15:31:54.325866 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-cwgw5" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" Jan 29 15:31:54 crc kubenswrapper[5008]: I0129 15:31:54.752576 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4dwdf" event={"ID":"d2d42845-cca1-4b60-bc84-4b2baebf702b","Type":"ContainerStarted","Data":"5ef6720d337e6b7bdd09776b3452601c072f482c35a5a9e55c34041df49ba20b"} Jan 29 15:31:55 crc kubenswrapper[5008]: E0129 15:31:55.325721 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fd6nq" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" Jan 29 15:31:55 crc kubenswrapper[5008]: I0129 15:31:55.761330 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7vmc" event={"ID":"9bcecb83-1aec-4bd4-9b46-f02deb628018","Type":"ContainerStarted","Data":"c55db14b5b65a6dc32558d4a826c10d9adc0281fc2c7c7c6aeb10f0ab3965d96"} Jan 29 15:31:55 crc kubenswrapper[5008]: I0129 15:31:55.763524 5008 generic.go:334] "Generic (PLEG): container finished" podID="d2d42845-cca1-4b60-bc84-4b2baebf702b" containerID="5ef6720d337e6b7bdd09776b3452601c072f482c35a5a9e55c34041df49ba20b" exitCode=0 Jan 29 15:31:55 crc kubenswrapper[5008]: I0129 15:31:55.763554 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-4dwdf" event={"ID":"d2d42845-cca1-4b60-bc84-4b2baebf702b","Type":"ContainerDied","Data":"5ef6720d337e6b7bdd09776b3452601c072f482c35a5a9e55c34041df49ba20b"} Jan 29 15:31:56 crc kubenswrapper[5008]: E0129 15:31:56.373592 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-mkxw5" podUID="6aef1830-577d-405c-bb54-6f9fe217ae86" Jan 29 15:31:56 crc kubenswrapper[5008]: I0129 15:31:56.774719 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4dwdf" event={"ID":"d2d42845-cca1-4b60-bc84-4b2baebf702b","Type":"ContainerStarted","Data":"f602032356e6af24b6539dc335606faed034c76d076edd55de00a1f6423d0579"} Jan 29 15:31:56 crc kubenswrapper[5008]: I0129 15:31:56.798491 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h7vmc" podStartSLOduration=5.969454205 podStartE2EDuration="1m40.79847478s" podCreationTimestamp="2026-01-29 15:30:16 +0000 UTC" firstStartedPulling="2026-01-29 15:30:20.05662067 +0000 UTC m=+163.729474907" lastFinishedPulling="2026-01-29 15:31:54.885641245 +0000 UTC m=+258.558495482" observedRunningTime="2026-01-29 15:31:56.795225438 +0000 UTC m=+260.468079735" watchObservedRunningTime="2026-01-29 15:31:56.79847478 +0000 UTC m=+260.471329017" Jan 29 15:31:56 crc kubenswrapper[5008]: I0129 15:31:56.818374 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4dwdf" podStartSLOduration=3.433740065 podStartE2EDuration="1m40.818358426s" podCreationTimestamp="2026-01-29 15:30:16 +0000 UTC" firstStartedPulling="2026-01-29 15:30:19.013641734 +0000 UTC m=+162.686495971" lastFinishedPulling="2026-01-29 15:31:56.398260095 +0000 UTC m=+260.071114332" observedRunningTime="2026-01-29 15:31:56.815226766 +0000 UTC m=+260.488081043" watchObservedRunningTime="2026-01-29 15:31:56.818358426 +0000 UTC m=+260.491212663" Jan 29 15:31:57 crc kubenswrapper[5008]: I0129 15:31:57.150104 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h7vmc" Jan 29 15:31:57 crc kubenswrapper[5008]: I0129 15:31:57.151651 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h7vmc" Jan 29 15:31:58 crc kubenswrapper[5008]: I0129 15:31:58.401013 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-h7vmc" podUID="9bcecb83-1aec-4bd4-9b46-f02deb628018" containerName="registry-server" probeResult="failure" output=< Jan 29 15:31:58 crc kubenswrapper[5008]: timeout: failed to connect service ":50051" within 1s Jan 29 15:31:58 crc kubenswrapper[5008]: > Jan 29 15:32:02 crc kubenswrapper[5008]: I0129 15:32:02.804713 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhtht" event={"ID":"a954daed-802a-4b46-81ef-7079dcddbaa5","Type":"ContainerStarted","Data":"3eeb9aabc3dc27af90cd2bf8cd8e6832ded1925edec96187d03601420f52e277"} Jan 29 15:32:03 crc kubenswrapper[5008]: I0129 15:32:03.811247 5008 generic.go:334] "Generic (PLEG): container finished" podID="a954daed-802a-4b46-81ef-7079dcddbaa5" containerID="3eeb9aabc3dc27af90cd2bf8cd8e6832ded1925edec96187d03601420f52e277" 
Jan 29 15:32:02 crc kubenswrapper[5008]: I0129 15:32:02.804713 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhtht" event={"ID":"a954daed-802a-4b46-81ef-7079dcddbaa5","Type":"ContainerStarted","Data":"3eeb9aabc3dc27af90cd2bf8cd8e6832ded1925edec96187d03601420f52e277"}
Jan 29 15:32:03 crc kubenswrapper[5008]: I0129 15:32:03.811247 5008 generic.go:334] "Generic (PLEG): container finished" podID="a954daed-802a-4b46-81ef-7079dcddbaa5" containerID="3eeb9aabc3dc27af90cd2bf8cd8e6832ded1925edec96187d03601420f52e277" exitCode=0
Jan 29 15:32:03 crc kubenswrapper[5008]: I0129 15:32:03.811300 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhtht" event={"ID":"a954daed-802a-4b46-81ef-7079dcddbaa5","Type":"ContainerDied","Data":"3eeb9aabc3dc27af90cd2bf8cd8e6832ded1925edec96187d03601420f52e277"}
Jan 29 15:32:06 crc kubenswrapper[5008]: I0129 15:32:06.771601 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4dwdf"
Jan 29 15:32:06 crc kubenswrapper[5008]: I0129 15:32:06.771981 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4dwdf"
Jan 29 15:32:06 crc kubenswrapper[5008]: I0129 15:32:06.971351 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4dwdf"
Jan 29 15:32:07 crc kubenswrapper[5008]: I0129 15:32:07.005768 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4dwdf"
Jan 29 15:32:07 crc kubenswrapper[5008]: I0129 15:32:07.228088 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h7vmc"
Jan 29 15:32:07 crc kubenswrapper[5008]: I0129 15:32:07.262878 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h7vmc"
Jan 29 15:32:07 crc kubenswrapper[5008]: I0129 15:32:07.837577 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhtht" event={"ID":"a954daed-802a-4b46-81ef-7079dcddbaa5","Type":"ContainerStarted","Data":"a279fd865e1e761fdf4aa984a1b9d5a9d26fdcf44f1cb482fe636069d4d8f0ee"}
Jan 29 15:32:07 crc kubenswrapper[5008]: I0129 15:32:07.866271 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lhtht" podStartSLOduration=4.616620277 podStartE2EDuration="1m48.866244207s" podCreationTimestamp="2026-01-29 15:30:19 +0000 UTC" firstStartedPulling="2026-01-29 15:30:23.095532993 +0000 UTC m=+166.768387230" lastFinishedPulling="2026-01-29 15:32:07.345156923 +0000 UTC m=+271.018011160" observedRunningTime="2026-01-29 15:32:07.855564447 +0000 UTC m=+271.528418694" watchObservedRunningTime="2026-01-29 15:32:07.866244207 +0000 UTC m=+271.539098474"
Jan 29 15:32:08 crc kubenswrapper[5008]: I0129 15:32:08.398358 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h7vmc"]
Jan 29 15:32:08 crc kubenswrapper[5008]: I0129 15:32:08.844770 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tst9c" event={"ID":"ea8deba9-72cb-4274-add1-e80591a9e7cc","Type":"ContainerStarted","Data":"c66762f5da3eb3376b4ceceb433da1a00c15c72c9c525f47d7d7528bad62fea4"}
Jan 29 15:32:08 crc kubenswrapper[5008]: I0129 15:32:08.844959 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h7vmc" podUID="9bcecb83-1aec-4bd4-9b46-f02deb628018" containerName="registry-server" containerID="cri-o://c55db14b5b65a6dc32558d4a826c10d9adc0281fc2c7c7c6aeb10f0ab3965d96" gracePeriod=2
Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.254134 5008 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-h7vmc" Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.392662 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bcecb83-1aec-4bd4-9b46-f02deb628018-utilities\") pod \"9bcecb83-1aec-4bd4-9b46-f02deb628018\" (UID: \"9bcecb83-1aec-4bd4-9b46-f02deb628018\") " Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.392985 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bcecb83-1aec-4bd4-9b46-f02deb628018-catalog-content\") pod \"9bcecb83-1aec-4bd4-9b46-f02deb628018\" (UID: \"9bcecb83-1aec-4bd4-9b46-f02deb628018\") " Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.393057 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btkm4\" (UniqueName: \"kubernetes.io/projected/9bcecb83-1aec-4bd4-9b46-f02deb628018-kube-api-access-btkm4\") pod \"9bcecb83-1aec-4bd4-9b46-f02deb628018\" (UID: \"9bcecb83-1aec-4bd4-9b46-f02deb628018\") " Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.405716 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bcecb83-1aec-4bd4-9b46-f02deb628018-utilities" (OuterVolumeSpecName: "utilities") pod "9bcecb83-1aec-4bd4-9b46-f02deb628018" (UID: "9bcecb83-1aec-4bd4-9b46-f02deb628018"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.406155 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bcecb83-1aec-4bd4-9b46-f02deb628018-kube-api-access-btkm4" (OuterVolumeSpecName: "kube-api-access-btkm4") pod "9bcecb83-1aec-4bd4-9b46-f02deb628018" (UID: "9bcecb83-1aec-4bd4-9b46-f02deb628018"). InnerVolumeSpecName "kube-api-access-btkm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.444311 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bcecb83-1aec-4bd4-9b46-f02deb628018-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9bcecb83-1aec-4bd4-9b46-f02deb628018" (UID: "9bcecb83-1aec-4bd4-9b46-f02deb628018"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.494676 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bcecb83-1aec-4bd4-9b46-f02deb628018-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.494715 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btkm4\" (UniqueName: \"kubernetes.io/projected/9bcecb83-1aec-4bd4-9b46-f02deb628018-kube-api-access-btkm4\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.494733 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bcecb83-1aec-4bd4-9b46-f02deb628018-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.851546 5008 generic.go:334] "Generic (PLEG): container finished" podID="9bcecb83-1aec-4bd4-9b46-f02deb628018" containerID="c55db14b5b65a6dc32558d4a826c10d9adc0281fc2c7c7c6aeb10f0ab3965d96" exitCode=0 Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.851626 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h7vmc" Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.851900 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7vmc" event={"ID":"9bcecb83-1aec-4bd4-9b46-f02deb628018","Type":"ContainerDied","Data":"c55db14b5b65a6dc32558d4a826c10d9adc0281fc2c7c7c6aeb10f0ab3965d96"} Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.852124 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7vmc" event={"ID":"9bcecb83-1aec-4bd4-9b46-f02deb628018","Type":"ContainerDied","Data":"af3e1a3fc6fe6b714e3700dd86c4612e0716f599f6f3f8cae393165561ce5bfe"} Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.852170 5008 scope.go:117] "RemoveContainer" containerID="c55db14b5b65a6dc32558d4a826c10d9adc0281fc2c7c7c6aeb10f0ab3965d96" Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.860015 5008 generic.go:334] "Generic (PLEG): container finished" podID="ea8deba9-72cb-4274-add1-e80591a9e7cc" containerID="c66762f5da3eb3376b4ceceb433da1a00c15c72c9c525f47d7d7528bad62fea4" exitCode=0 Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.860089 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tst9c" event={"ID":"ea8deba9-72cb-4274-add1-e80591a9e7cc","Type":"ContainerDied","Data":"c66762f5da3eb3376b4ceceb433da1a00c15c72c9c525f47d7d7528bad62fea4"} Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.865880 5008 generic.go:334] "Generic (PLEG): container finished" podID="6aebe040-289b-48c1-a825-f12b471a5ad6" containerID="b7bd66f1ab52d36602a85b79dd606c04b810e09efd18dedd3f58cfeff8f24869" exitCode=0 Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.865918 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwgw5" event={"ID":"6aebe040-289b-48c1-a825-f12b471a5ad6","Type":"ContainerDied","Data":"b7bd66f1ab52d36602a85b79dd606c04b810e09efd18dedd3f58cfeff8f24869"} Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.872455 5008 scope.go:117] "RemoveContainer" containerID="b223097e454aee435bbd77657fe9958c2c3189f1cb2e7d87694fd6419e82df76" Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.918839 5008 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h7vmc"] Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.919290 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h7vmc"] Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.920371 5008 scope.go:117] "RemoveContainer" containerID="2c5bd79fe1383fd09ebd0db5b0a83990cb1f07f4f895a71dc2c671033d14863f" Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.941634 5008 scope.go:117] "RemoveContainer" containerID="c55db14b5b65a6dc32558d4a826c10d9adc0281fc2c7c7c6aeb10f0ab3965d96" Jan 29 15:32:09 crc kubenswrapper[5008]: E0129 15:32:09.942183 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c55db14b5b65a6dc32558d4a826c10d9adc0281fc2c7c7c6aeb10f0ab3965d96\": container with ID starting with c55db14b5b65a6dc32558d4a826c10d9adc0281fc2c7c7c6aeb10f0ab3965d96 not found: ID does not exist" containerID="c55db14b5b65a6dc32558d4a826c10d9adc0281fc2c7c7c6aeb10f0ab3965d96" Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.942223 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c55db14b5b65a6dc32558d4a826c10d9adc0281fc2c7c7c6aeb10f0ab3965d96"} err="failed to get container status \"c55db14b5b65a6dc32558d4a826c10d9adc0281fc2c7c7c6aeb10f0ab3965d96\": rpc error: code = NotFound desc = could not find container \"c55db14b5b65a6dc32558d4a826c10d9adc0281fc2c7c7c6aeb10f0ab3965d96\": container with ID starting with c55db14b5b65a6dc32558d4a826c10d9adc0281fc2c7c7c6aeb10f0ab3965d96 not found: ID does not exist" Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.942249 5008 scope.go:117] "RemoveContainer" containerID="b223097e454aee435bbd77657fe9958c2c3189f1cb2e7d87694fd6419e82df76" Jan 29 15:32:09 crc kubenswrapper[5008]: E0129 15:32:09.943129 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b223097e454aee435bbd77657fe9958c2c3189f1cb2e7d87694fd6419e82df76\": container with ID starting with b223097e454aee435bbd77657fe9958c2c3189f1cb2e7d87694fd6419e82df76 not found: ID does not exist" containerID="b223097e454aee435bbd77657fe9958c2c3189f1cb2e7d87694fd6419e82df76" Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.943155 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b223097e454aee435bbd77657fe9958c2c3189f1cb2e7d87694fd6419e82df76"} err="failed to get container status \"b223097e454aee435bbd77657fe9958c2c3189f1cb2e7d87694fd6419e82df76\": rpc error: code = NotFound desc = could not find container \"b223097e454aee435bbd77657fe9958c2c3189f1cb2e7d87694fd6419e82df76\": container with ID starting with b223097e454aee435bbd77657fe9958c2c3189f1cb2e7d87694fd6419e82df76 not found: ID does not exist" Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.943172 5008 scope.go:117] "RemoveContainer" containerID="2c5bd79fe1383fd09ebd0db5b0a83990cb1f07f4f895a71dc2c671033d14863f" Jan 29 15:32:09 crc kubenswrapper[5008]: E0129 15:32:09.943695 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c5bd79fe1383fd09ebd0db5b0a83990cb1f07f4f895a71dc2c671033d14863f\": container with ID starting with 2c5bd79fe1383fd09ebd0db5b0a83990cb1f07f4f895a71dc2c671033d14863f not found: ID does not exist" 
containerID="2c5bd79fe1383fd09ebd0db5b0a83990cb1f07f4f895a71dc2c671033d14863f" Jan 29 15:32:09 crc kubenswrapper[5008]: I0129 15:32:09.943730 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c5bd79fe1383fd09ebd0db5b0a83990cb1f07f4f895a71dc2c671033d14863f"} err="failed to get container status \"2c5bd79fe1383fd09ebd0db5b0a83990cb1f07f4f895a71dc2c671033d14863f\": rpc error: code = NotFound desc = could not find container \"2c5bd79fe1383fd09ebd0db5b0a83990cb1f07f4f895a71dc2c671033d14863f\": container with ID starting with 2c5bd79fe1383fd09ebd0db5b0a83990cb1f07f4f895a71dc2c671033d14863f not found: ID does not exist" Jan 29 15:32:10 crc kubenswrapper[5008]: I0129 15:32:10.190655 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lhtht" Jan 29 15:32:10 crc kubenswrapper[5008]: I0129 15:32:10.190693 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lhtht" Jan 29 15:32:11 crc kubenswrapper[5008]: I0129 15:32:11.236411 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lhtht" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" containerName="registry-server" probeResult="failure" output=< Jan 29 15:32:11 crc kubenswrapper[5008]: timeout: failed to connect service ":50051" within 1s Jan 29 15:32:11 crc kubenswrapper[5008]: > Jan 29 15:32:11 crc kubenswrapper[5008]: I0129 15:32:11.332444 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bcecb83-1aec-4bd4-9b46-f02deb628018" path="/var/lib/kubelet/pods/9bcecb83-1aec-4bd4-9b46-f02deb628018/volumes" Jan 29 15:32:12 crc kubenswrapper[5008]: I0129 15:32:12.884323 5008 generic.go:334] "Generic (PLEG): container finished" podID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" containerID="e1c843618cf47e0f0dd906fe965d45ec9a3b4948ac0b8fb36792a472149a1987" exitCode=0 Jan 29 15:32:12 crc kubenswrapper[5008]: I0129 15:32:12.884398 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z9t2h" event={"ID":"250e7db8-88dd-44fd-8d73-51a6f8f4ba96","Type":"ContainerDied","Data":"e1c843618cf47e0f0dd906fe965d45ec9a3b4948ac0b8fb36792a472149a1987"} Jan 29 15:32:12 crc kubenswrapper[5008]: I0129 15:32:12.888433 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tst9c" event={"ID":"ea8deba9-72cb-4274-add1-e80591a9e7cc","Type":"ContainerStarted","Data":"9c3f342d019c4b99216e2db36a8519922ee184a93aa73ddc5f5e324d243d11e6"} Jan 29 15:32:12 crc kubenswrapper[5008]: I0129 15:32:12.891622 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwgw5" event={"ID":"6aebe040-289b-48c1-a825-f12b471a5ad6","Type":"ContainerStarted","Data":"fb026266eabc9b6ace205f36e42b0dab030a6b065f770827028c0ed16d1aa84f"} Jan 29 15:32:12 crc kubenswrapper[5008]: I0129 15:32:12.893064 5008 generic.go:334] "Generic (PLEG): container finished" podID="6aef1830-577d-405c-bb54-6f9fe217ae86" containerID="6fbbb1c70108b41582b5edef8de3a67424fd51168b22d0d1f5469f11eceefd27" exitCode=0 Jan 29 15:32:12 crc kubenswrapper[5008]: I0129 15:32:12.893115 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkxw5" event={"ID":"6aef1830-577d-405c-bb54-6f9fe217ae86","Type":"ContainerDied","Data":"6fbbb1c70108b41582b5edef8de3a67424fd51168b22d0d1f5469f11eceefd27"} Jan 29 15:32:12 crc kubenswrapper[5008]: 
Jan 29 15:32:10 crc kubenswrapper[5008]: I0129 15:32:10.190655 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lhtht"
Jan 29 15:32:10 crc kubenswrapper[5008]: I0129 15:32:10.190693 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lhtht"
Jan 29 15:32:11 crc kubenswrapper[5008]: I0129 15:32:11.236411 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lhtht" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" containerName="registry-server" probeResult="failure" output=<
Jan 29 15:32:11 crc kubenswrapper[5008]: timeout: failed to connect service ":50051" within 1s
Jan 29 15:32:11 crc kubenswrapper[5008]: >
Jan 29 15:32:11 crc kubenswrapper[5008]: I0129 15:32:11.332444 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bcecb83-1aec-4bd4-9b46-f02deb628018" path="/var/lib/kubelet/pods/9bcecb83-1aec-4bd4-9b46-f02deb628018/volumes"
Jan 29 15:32:12 crc kubenswrapper[5008]: I0129 15:32:12.884323 5008 generic.go:334] "Generic (PLEG): container finished" podID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" containerID="e1c843618cf47e0f0dd906fe965d45ec9a3b4948ac0b8fb36792a472149a1987" exitCode=0
Jan 29 15:32:12 crc kubenswrapper[5008]: I0129 15:32:12.884398 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z9t2h" event={"ID":"250e7db8-88dd-44fd-8d73-51a6f8f4ba96","Type":"ContainerDied","Data":"e1c843618cf47e0f0dd906fe965d45ec9a3b4948ac0b8fb36792a472149a1987"}
Jan 29 15:32:12 crc kubenswrapper[5008]: I0129 15:32:12.888433 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tst9c" event={"ID":"ea8deba9-72cb-4274-add1-e80591a9e7cc","Type":"ContainerStarted","Data":"9c3f342d019c4b99216e2db36a8519922ee184a93aa73ddc5f5e324d243d11e6"}
Jan 29 15:32:12 crc kubenswrapper[5008]: I0129 15:32:12.891622 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwgw5" event={"ID":"6aebe040-289b-48c1-a825-f12b471a5ad6","Type":"ContainerStarted","Data":"fb026266eabc9b6ace205f36e42b0dab030a6b065f770827028c0ed16d1aa84f"}
Jan 29 15:32:12 crc kubenswrapper[5008]: I0129 15:32:12.893064 5008 generic.go:334] "Generic (PLEG): container finished" podID="6aef1830-577d-405c-bb54-6f9fe217ae86" containerID="6fbbb1c70108b41582b5edef8de3a67424fd51168b22d0d1f5469f11eceefd27" exitCode=0
Jan 29 15:32:12 crc kubenswrapper[5008]: I0129 15:32:12.893115 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkxw5" event={"ID":"6aef1830-577d-405c-bb54-6f9fe217ae86","Type":"ContainerDied","Data":"6fbbb1c70108b41582b5edef8de3a67424fd51168b22d0d1f5469f11eceefd27"}
Jan 29 15:32:12 crc kubenswrapper[5008]: I0129 15:32:12.894947 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fd6nq" event={"ID":"37742fc9-fce4-41f0-ba04-7232b6e647a7","Type":"ContainerDied","Data":"20a33ecc180de094bba9265fa7129b16b4f9de45343188f6197cb71f4f1ca528"}
Jan 29 15:32:12 crc kubenswrapper[5008]: I0129 15:32:12.894969 5008 generic.go:334] "Generic (PLEG): container finished" podID="37742fc9-fce4-41f0-ba04-7232b6e647a7" containerID="20a33ecc180de094bba9265fa7129b16b4f9de45343188f6197cb71f4f1ca528" exitCode=0
Jan 29 15:32:12 crc kubenswrapper[5008]: I0129 15:32:12.939364 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tst9c" podStartSLOduration=5.101490329 podStartE2EDuration="1m53.939346405s" podCreationTimestamp="2026-01-29 15:30:19 +0000 UTC" firstStartedPulling="2026-01-29 15:30:23.095590535 +0000 UTC m=+166.768444772" lastFinishedPulling="2026-01-29 15:32:11.933446611 +0000 UTC m=+275.606300848" observedRunningTime="2026-01-29 15:32:12.937049543 +0000 UTC m=+276.609903810" watchObservedRunningTime="2026-01-29 15:32:12.939346405 +0000 UTC m=+276.612200652"
Jan 29 15:32:12 crc kubenswrapper[5008]: I0129 15:32:12.958398 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cwgw5" podStartSLOduration=3.88256389 podStartE2EDuration="1m56.958377532s" podCreationTimestamp="2026-01-29 15:30:16 +0000 UTC" firstStartedPulling="2026-01-29 15:30:19.0207747 +0000 UTC m=+162.693628937" lastFinishedPulling="2026-01-29 15:32:12.096588342 +0000 UTC m=+275.769442579" observedRunningTime="2026-01-29 15:32:12.951700472 +0000 UTC m=+276.624554729" watchObservedRunningTime="2026-01-29 15:32:12.958377532 +0000 UTC m=+276.631231789"
Jan 29 15:32:13 crc kubenswrapper[5008]: I0129 15:32:13.903411 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z9t2h" event={"ID":"250e7db8-88dd-44fd-8d73-51a6f8f4ba96","Type":"ContainerStarted","Data":"437e7c2a1dc758509d30fbbc79bf01370b5111c6588abe44eded360be5897c51"}
Jan 29 15:32:13 crc kubenswrapper[5008]: I0129 15:32:13.906012 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkxw5" event={"ID":"6aef1830-577d-405c-bb54-6f9fe217ae86","Type":"ContainerStarted","Data":"ed3317e50ebd56908f1ad0d5cbc15af6b8fc520caee4385415a1615527ccd62b"}
Jan 29 15:32:13 crc kubenswrapper[5008]: I0129 15:32:13.927143 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mkxw5" podStartSLOduration=4.68211099 podStartE2EDuration="1m55.927127742s" podCreationTimestamp="2026-01-29 15:30:18 +0000 UTC" firstStartedPulling="2026-01-29 15:30:22.075336143 +0000 UTC m=+165.748190380" lastFinishedPulling="2026-01-29 15:32:13.320352895 +0000 UTC m=+276.993207132" observedRunningTime="2026-01-29 15:32:13.924942713 +0000 UTC m=+277.597796960" watchObservedRunningTime="2026-01-29 15:32:13.927127742 +0000 UTC m=+277.599981979"
Jan 29 15:32:16 crc kubenswrapper[5008]: I0129 15:32:16.719846 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cwgw5"
Jan 29 15:32:16 crc kubenswrapper[5008]: I0129 15:32:16.720144 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cwgw5"
Jan 29 15:32:16 crc kubenswrapper[5008]: I0129 15:32:16.769603 5008 kubelet.go:2542] "SyncLoop
(probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cwgw5" Jan 29 15:32:16 crc kubenswrapper[5008]: I0129 15:32:16.797071 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z9t2h" podStartSLOduration=7.55810387 podStartE2EDuration="2m0.797047638s" podCreationTimestamp="2026-01-29 15:30:16 +0000 UTC" firstStartedPulling="2026-01-29 15:30:20.057048301 +0000 UTC m=+163.729902528" lastFinishedPulling="2026-01-29 15:32:13.295992059 +0000 UTC m=+276.968846296" observedRunningTime="2026-01-29 15:32:14.933365013 +0000 UTC m=+278.606219250" watchObservedRunningTime="2026-01-29 15:32:16.797047638 +0000 UTC m=+280.469901925" Jan 29 15:32:16 crc kubenswrapper[5008]: I0129 15:32:16.858098 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-585448bccb-4m9fq"] Jan 29 15:32:16 crc kubenswrapper[5008]: I0129 15:32:16.858347 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" podUID="64612440-e59b-46bb-a60f-f10989166e58" containerName="controller-manager" containerID="cri-o://40321afd189e235fc1bb78923d74cb98e8fe85b88b55f9bd3844976bd07eb0f5" gracePeriod=30 Jan 29 15:32:16 crc kubenswrapper[5008]: I0129 15:32:16.955965 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:32:16 crc kubenswrapper[5008]: I0129 15:32:16.956012 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:32:16 crc kubenswrapper[5008]: I0129 15:32:16.956290 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4"] Jan 29 15:32:16 crc kubenswrapper[5008]: I0129 15:32:16.956497 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" containerName="route-controller-manager" containerID="cri-o://dbb82c43ba7943df2747aa78a2127da4c2cba3ad40144842a2f920c5e71f8479" gracePeriod=30 Jan 29 15:32:17 crc kubenswrapper[5008]: I0129 15:32:17.004586 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:17.930037 5008 generic.go:334] "Generic (PLEG): container finished" podID="64612440-e59b-46bb-a60f-f10989166e58" containerID="40321afd189e235fc1bb78923d74cb98e8fe85b88b55f9bd3844976bd07eb0f5" exitCode=0 Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:17.930127 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" event={"ID":"64612440-e59b-46bb-a60f-f10989166e58","Type":"ContainerDied","Data":"40321afd189e235fc1bb78923d74cb98e8fe85b88b55f9bd3844976bd07eb0f5"} Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:17.931957 5008 generic.go:334] "Generic (PLEG): container finished" podID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" containerID="dbb82c43ba7943df2747aa78a2127da4c2cba3ad40144842a2f920c5e71f8479" exitCode=0 Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:17.932024 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" 
event={"ID":"bf35ff68-68b3-4743-803f-e451a5f5c5bd","Type":"ContainerDied","Data":"dbb82c43ba7943df2747aa78a2127da4c2cba3ad40144842a2f920c5e71f8479"} Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:18.821325 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mkxw5" Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:18.821738 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mkxw5" Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:18.872434 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mkxw5" Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:18.938252 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" event={"ID":"64612440-e59b-46bb-a60f-f10989166e58","Type":"ContainerDied","Data":"cbb9854cfe9f99d27e1796a8bf85e10b2281996e9b1dad79a2b1e102f79ba6c3"} Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:18.938286 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbb9854cfe9f99d27e1796a8bf85e10b2281996e9b1dad79a2b1e102f79ba6c3" Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:18.940672 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fd6nq" event={"ID":"37742fc9-fce4-41f0-ba04-7232b6e647a7","Type":"ContainerStarted","Data":"a8d67992841dda8d8ecfe4b7861b1a552c63f6a32f809f7c1c99d45b6eba1024"} Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:18.951247 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:18.985748 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-67df9d9956-9zzpb"] Jan 29 15:32:18 crc kubenswrapper[5008]: E0129 15:32:18.986175 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bcecb83-1aec-4bd4-9b46-f02deb628018" containerName="registry-server" Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:18.986278 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bcecb83-1aec-4bd4-9b46-f02deb628018" containerName="registry-server" Jan 29 15:32:18 crc kubenswrapper[5008]: E0129 15:32:18.986352 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bcecb83-1aec-4bd4-9b46-f02deb628018" containerName="extract-utilities" Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:18.986430 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bcecb83-1aec-4bd4-9b46-f02deb628018" containerName="extract-utilities" Jan 29 15:32:18 crc kubenswrapper[5008]: E0129 15:32:18.986511 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bcecb83-1aec-4bd4-9b46-f02deb628018" containerName="extract-content" Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:18.986572 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bcecb83-1aec-4bd4-9b46-f02deb628018" containerName="extract-content" Jan 29 15:32:18 crc kubenswrapper[5008]: E0129 15:32:18.986628 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64612440-e59b-46bb-a60f-f10989166e58" containerName="controller-manager" Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:18.986688 5008 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="64612440-e59b-46bb-a60f-f10989166e58" containerName="controller-manager" Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:18.986875 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bcecb83-1aec-4bd4-9b46-f02deb628018" containerName="registry-server" Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:18.986959 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="64612440-e59b-46bb-a60f-f10989166e58" containerName="controller-manager" Jan 29 15:32:18 crc kubenswrapper[5008]: I0129 15:32:18.987411 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.000541 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67df9d9956-9zzpb"] Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.012464 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mkxw5" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.050775 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64612440-e59b-46bb-a60f-f10989166e58-serving-cert\") pod \"64612440-e59b-46bb-a60f-f10989166e58\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.050830 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-config\") pod \"64612440-e59b-46bb-a60f-f10989166e58\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.050894 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-proxy-ca-bundles\") pod \"64612440-e59b-46bb-a60f-f10989166e58\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.050913 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st2l6\" (UniqueName: \"kubernetes.io/projected/64612440-e59b-46bb-a60f-f10989166e58-kube-api-access-st2l6\") pod \"64612440-e59b-46bb-a60f-f10989166e58\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.052038 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "64612440-e59b-46bb-a60f-f10989166e58" (UID: "64612440-e59b-46bb-a60f-f10989166e58"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.052171 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-config" (OuterVolumeSpecName: "config") pod "64612440-e59b-46bb-a60f-f10989166e58" (UID: "64612440-e59b-46bb-a60f-f10989166e58"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.056947 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64612440-e59b-46bb-a60f-f10989166e58-kube-api-access-st2l6" (OuterVolumeSpecName: "kube-api-access-st2l6") pod "64612440-e59b-46bb-a60f-f10989166e58" (UID: "64612440-e59b-46bb-a60f-f10989166e58"). InnerVolumeSpecName "kube-api-access-st2l6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.058021 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64612440-e59b-46bb-a60f-f10989166e58-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "64612440-e59b-46bb-a60f-f10989166e58" (UID: "64612440-e59b-46bb-a60f-f10989166e58"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.152396 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-client-ca\") pod \"64612440-e59b-46bb-a60f-f10989166e58\" (UID: \"64612440-e59b-46bb-a60f-f10989166e58\") " Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.152540 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-proxy-ca-bundles\") pod \"controller-manager-67df9d9956-9zzpb\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.152584 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-config\") pod \"controller-manager-67df9d9956-9zzpb\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.152601 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxhk7\" (UniqueName: \"kubernetes.io/projected/17f45bda-9243-4ae2-858a-e32e62abeebc-kube-api-access-mxhk7\") pod \"controller-manager-67df9d9956-9zzpb\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.152635 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17f45bda-9243-4ae2-858a-e32e62abeebc-serving-cert\") pod \"controller-manager-67df9d9956-9zzpb\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.152658 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-client-ca\") pod \"controller-manager-67df9d9956-9zzpb\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.152707 5008 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64612440-e59b-46bb-a60f-f10989166e58-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.152717 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.152727 5008 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.152735 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st2l6\" (UniqueName: \"kubernetes.io/projected/64612440-e59b-46bb-a60f-f10989166e58-kube-api-access-st2l6\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.152905 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-client-ca" (OuterVolumeSpecName: "client-ca") pod "64612440-e59b-46bb-a60f-f10989166e58" (UID: "64612440-e59b-46bb-a60f-f10989166e58"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.202580 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.254200 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-config\") pod \"controller-manager-67df9d9956-9zzpb\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.254240 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxhk7\" (UniqueName: \"kubernetes.io/projected/17f45bda-9243-4ae2-858a-e32e62abeebc-kube-api-access-mxhk7\") pod \"controller-manager-67df9d9956-9zzpb\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.254278 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17f45bda-9243-4ae2-858a-e32e62abeebc-serving-cert\") pod \"controller-manager-67df9d9956-9zzpb\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.254302 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-client-ca\") pod \"controller-manager-67df9d9956-9zzpb\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.254666 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-proxy-ca-bundles\") pod \"controller-manager-67df9d9956-9zzpb\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.254812 5008 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64612440-e59b-46bb-a60f-f10989166e58-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.256083 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-client-ca\") pod \"controller-manager-67df9d9956-9zzpb\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.256132 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-config\") pod \"controller-manager-67df9d9956-9zzpb\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.256977 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-proxy-ca-bundles\") pod \"controller-manager-67df9d9956-9zzpb\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.260601 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17f45bda-9243-4ae2-858a-e32e62abeebc-serving-cert\") pod \"controller-manager-67df9d9956-9zzpb\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.275643 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxhk7\" (UniqueName: \"kubernetes.io/projected/17f45bda-9243-4ae2-858a-e32e62abeebc-kube-api-access-mxhk7\") pod \"controller-manager-67df9d9956-9zzpb\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.302311 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.354055 5008 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 15:32:19 crc kubenswrapper[5008]: E0129 15:32:19.354265 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" containerName="route-controller-manager" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.354281 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" containerName="route-controller-manager" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.354376 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" containerName="route-controller-manager" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.354637 5008 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.354684 5008 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.355178 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf35ff68-68b3-4743-803f-e451a5f5c5bd-config\") pod \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\" (UID: \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\") " Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.355222 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf35ff68-68b3-4743-803f-e451a5f5c5bd-client-ca\") pod \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\" (UID: \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\") " Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.355253 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf35ff68-68b3-4743-803f-e451a5f5c5bd-serving-cert\") pod \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\" (UID: \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\") " Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.355273 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw5nn\" (UniqueName: \"kubernetes.io/projected/bf35ff68-68b3-4743-803f-e451a5f5c5bd-kube-api-access-mw5nn\") pod \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\" (UID: \"bf35ff68-68b3-4743-803f-e451a5f5c5bd\") " Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.355939 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d" gracePeriod=15 Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.355966 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2" gracePeriod=15 Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.356045 5008 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0" gracePeriod=15 Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.356078 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7" gracePeriod=15 Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.356134 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.356566 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c" gracePeriod=15 Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.356943 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf35ff68-68b3-4743-803f-e451a5f5c5bd-config" (OuterVolumeSpecName: "config") pod "bf35ff68-68b3-4743-803f-e451a5f5c5bd" (UID: "bf35ff68-68b3-4743-803f-e451a5f5c5bd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:32:19 crc kubenswrapper[5008]: E0129 15:32:19.357596 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.357615 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:32:19 crc kubenswrapper[5008]: E0129 15:32:19.357629 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.357634 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 15:32:19 crc kubenswrapper[5008]: E0129 15:32:19.357645 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.357651 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 15:32:19 crc kubenswrapper[5008]: E0129 15:32:19.357882 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.357887 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:32:19 crc kubenswrapper[5008]: E0129 15:32:19.357895 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.357901 5008 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 15:32:19 crc kubenswrapper[5008]: E0129 15:32:19.357912 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.357917 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 15:32:19 crc kubenswrapper[5008]: E0129 15:32:19.357925 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.357931 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.360106 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf35ff68-68b3-4743-803f-e451a5f5c5bd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bf35ff68-68b3-4743-803f-e451a5f5c5bd" (UID: "bf35ff68-68b3-4743-803f-e451a5f5c5bd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.360419 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf35ff68-68b3-4743-803f-e451a5f5c5bd-client-ca" (OuterVolumeSpecName: "client-ca") pod "bf35ff68-68b3-4743-803f-e451a5f5c5bd" (UID: "bf35ff68-68b3-4743-803f-e451a5f5c5bd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.364601 5008 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.368677 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.368708 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.368721 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.368742 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.368760 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.368774 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 15:32:19 crc kubenswrapper[5008]: E0129 15:32:19.369091 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.369103 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.369289 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.404942 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.411624 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf35ff68-68b3-4743-803f-e451a5f5c5bd-kube-api-access-mw5nn" (OuterVolumeSpecName: "kube-api-access-mw5nn") pod "bf35ff68-68b3-4743-803f-e451a5f5c5bd" (UID: "bf35ff68-68b3-4743-803f-e451a5f5c5bd"). InnerVolumeSpecName "kube-api-access-mw5nn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.460238 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.460306 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.460584 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.460619 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.460669 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.460692 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.460742 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.460820 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.460876 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf35ff68-68b3-4743-803f-e451a5f5c5bd-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.460891 5008 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf35ff68-68b3-4743-803f-e451a5f5c5bd-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.460902 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf35ff68-68b3-4743-803f-e451a5f5c5bd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.460913 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mw5nn\" (UniqueName: \"kubernetes.io/projected/bf35ff68-68b3-4743-803f-e451a5f5c5bd-kube-api-access-mw5nn\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.563376 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.563696 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.563721 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.563738 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.563768 5008 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.563800 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.563824 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.563842 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.563898 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.563894 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.563933 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.563498 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.563954 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.563970 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.563976 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.563999 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.694553 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:19 crc kubenswrapper[5008]: W0129 15:32:19.710829 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-031631225f1ca18b081aa08300f1896d4e4f3792d561cb5553da09b79d07d25d WatchSource:0}: Error finding container 031631225f1ca18b081aa08300f1896d4e4f3792d561cb5553da09b79d07d25d: Status 404 returned error can't find the container with id 031631225f1ca18b081aa08300f1896d4e4f3792d561cb5553da09b79d07d25d Jan 29 15:32:19 crc kubenswrapper[5008]: E0129 15:32:19.713770 5008 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.50:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f3d724d56d6e9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 15:32:19.712997097 +0000 UTC m=+283.385851334,LastTimestamp:2026-01-29 15:32:19.712997097 +0000 UTC m=+283.385851334,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.804034 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tst9c" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.804090 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tst9c" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.847513 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tst9c" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.848028 
5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.848316 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.953037 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.955554 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.972448 5008 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c" exitCode=0 Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.972682 5008 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d" exitCode=0 Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.972751 5008 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2" exitCode=0 Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.972834 5008 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0" exitCode=2 Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.972960 5008 scope.go:117] "RemoveContainer" containerID="4d710e35a02d14289e2d5fe6b35c08621e78c96b7e9e30451ffd6d51962fb761" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.980232 5008 generic.go:334] "Generic (PLEG): container finished" podID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" containerID="7c398ab151812dfd065f5ce688e5a1aab9c54766a8265004ad57f01a071e1896" exitCode=0 Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.980517 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"af4b11bc-2d2f-4e68-ab59-cbc08fecba52","Type":"ContainerDied","Data":"7c398ab151812dfd065f5ce688e5a1aab9c54766a8265004ad57f01a071e1896"} Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.981182 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.981481 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.981769 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.984048 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" event={"ID":"bf35ff68-68b3-4743-803f-e451a5f5c5bd","Type":"ContainerDied","Data":"151a001a83e99402752792ff1d9b03e857965ca404f04dce980c55396aacc517"} Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.984109 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.984700 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.985216 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.985563 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.985815 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.985974 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"031631225f1ca18b081aa08300f1896d4e4f3792d561cb5553da09b79d07d25d"} Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.986050 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: 
connect: connection refused" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.986235 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.986243 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.986495 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.986853 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.987183 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.987651 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.987944 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.988191 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.988468 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: 
connection refused" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.988734 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:19 crc kubenswrapper[5008]: I0129 15:32:19.989058 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.002555 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.002839 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.003393 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.003625 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.003956 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.004221 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.004526 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.004808 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.005094 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.005320 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.005574 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.005810 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.027370 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tst9c" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.028276 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.028655 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: E0129 15:32:20.028700 5008 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 29 15:32:20 crc kubenswrapper[5008]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_controller-manager-67df9d9956-9zzpb_openshift-controller-manager_17f45bda-9243-4ae2-858a-e32e62abeebc_0(fb02973262e70205852a89456f4d44a195841d276d93495b63daf9794681bf72): error adding pod openshift-controller-manager_controller-manager-67df9d9956-9zzpb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fb02973262e70205852a89456f4d44a195841d276d93495b63daf9794681bf72" Netns:"/var/run/netns/2b7ba383-8a1d-4a1b-8df0-841f1e10d4c2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-67df9d9956-9zzpb;K8S_POD_INFRA_CONTAINER_ID=fb02973262e70205852a89456f4d44a195841d276d93495b63daf9794681bf72;K8S_POD_UID=17f45bda-9243-4ae2-858a-e32e62abeebc" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-67df9d9956-9zzpb] networking: Multus: [openshift-controller-manager/controller-manager-67df9d9956-9zzpb/17f45bda-9243-4ae2-858a-e32e62abeebc]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-67df9d9956-9zzpb in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-67df9d9956-9zzpb in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-67df9d9956-9zzpb?timeout=1m0s": dial tcp 38.102.83.50:6443: connect: connection refused Jan 29 15:32:20 crc kubenswrapper[5008]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 29 15:32:20 crc kubenswrapper[5008]: > Jan 29 15:32:20 crc kubenswrapper[5008]: E0129 15:32:20.028943 5008 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 29 15:32:20 crc kubenswrapper[5008]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-67df9d9956-9zzpb_openshift-controller-manager_17f45bda-9243-4ae2-858a-e32e62abeebc_0(fb02973262e70205852a89456f4d44a195841d276d93495b63daf9794681bf72): error adding pod openshift-controller-manager_controller-manager-67df9d9956-9zzpb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fb02973262e70205852a89456f4d44a195841d276d93495b63daf9794681bf72" Netns:"/var/run/netns/2b7ba383-8a1d-4a1b-8df0-841f1e10d4c2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-67df9d9956-9zzpb;K8S_POD_INFRA_CONTAINER_ID=fb02973262e70205852a89456f4d44a195841d276d93495b63daf9794681bf72;K8S_POD_UID=17f45bda-9243-4ae2-858a-e32e62abeebc" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-67df9d9956-9zzpb] networking: Multus: [openshift-controller-manager/controller-manager-67df9d9956-9zzpb/17f45bda-9243-4ae2-858a-e32e62abeebc]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-67df9d9956-9zzpb in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-67df9d9956-9zzpb in out of cluster 
comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-67df9d9956-9zzpb?timeout=1m0s": dial tcp 38.102.83.50:6443: connect: connection refused Jan 29 15:32:20 crc kubenswrapper[5008]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 29 15:32:20 crc kubenswrapper[5008]: > pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:20 crc kubenswrapper[5008]: E0129 15:32:20.028966 5008 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 29 15:32:20 crc kubenswrapper[5008]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-67df9d9956-9zzpb_openshift-controller-manager_17f45bda-9243-4ae2-858a-e32e62abeebc_0(fb02973262e70205852a89456f4d44a195841d276d93495b63daf9794681bf72): error adding pod openshift-controller-manager_controller-manager-67df9d9956-9zzpb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fb02973262e70205852a89456f4d44a195841d276d93495b63daf9794681bf72" Netns:"/var/run/netns/2b7ba383-8a1d-4a1b-8df0-841f1e10d4c2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-67df9d9956-9zzpb;K8S_POD_INFRA_CONTAINER_ID=fb02973262e70205852a89456f4d44a195841d276d93495b63daf9794681bf72;K8S_POD_UID=17f45bda-9243-4ae2-858a-e32e62abeebc" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-67df9d9956-9zzpb] networking: Multus: [openshift-controller-manager/controller-manager-67df9d9956-9zzpb/17f45bda-9243-4ae2-858a-e32e62abeebc]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-67df9d9956-9zzpb in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-67df9d9956-9zzpb in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-67df9d9956-9zzpb?timeout=1m0s": dial tcp 38.102.83.50:6443: connect: connection refused Jan 29 15:32:20 crc kubenswrapper[5008]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 29 15:32:20 crc kubenswrapper[5008]: > pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:20 crc kubenswrapper[5008]: E0129 15:32:20.029014 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-67df9d9956-9zzpb_openshift-controller-manager(17f45bda-9243-4ae2-858a-e32e62abeebc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-67df9d9956-9zzpb_openshift-controller-manager(17f45bda-9243-4ae2-858a-e32e62abeebc)\\\": rpc error: code = 
Unknown desc = failed to create pod network sandbox k8s_controller-manager-67df9d9956-9zzpb_openshift-controller-manager_17f45bda-9243-4ae2-858a-e32e62abeebc_0(fb02973262e70205852a89456f4d44a195841d276d93495b63daf9794681bf72): error adding pod openshift-controller-manager_controller-manager-67df9d9956-9zzpb to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"fb02973262e70205852a89456f4d44a195841d276d93495b63daf9794681bf72\\\" Netns:\\\"/var/run/netns/2b7ba383-8a1d-4a1b-8df0-841f1e10d4c2\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-67df9d9956-9zzpb;K8S_POD_INFRA_CONTAINER_ID=fb02973262e70205852a89456f4d44a195841d276d93495b63daf9794681bf72;K8S_POD_UID=17f45bda-9243-4ae2-858a-e32e62abeebc\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-67df9d9956-9zzpb] networking: Multus: [openshift-controller-manager/controller-manager-67df9d9956-9zzpb/17f45bda-9243-4ae2-858a-e32e62abeebc]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-67df9d9956-9zzpb in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-67df9d9956-9zzpb in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-67df9d9956-9zzpb?timeout=1m0s\\\": dial tcp 38.102.83.50:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" podUID="17f45bda-9243-4ae2-858a-e32e62abeebc" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.029866 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.030142 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.030495 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 
crc kubenswrapper[5008]: I0129 15:32:20.030875 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.225897 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lhtht" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.227120 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.227595 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.228046 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.228430 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.228899 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.229151 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.229446 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.270625 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-operators-lhtht" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.271684 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.272127 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.272400 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.272659 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.272870 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.273095 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.273339 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: E0129 15:32:20.682240 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:32:20Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:32:20Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:32:20Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:32:20Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:15db2d5dee506f58d0ee5bf1684107211c0473c43ef6111e13df0c55850f77c9\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:acd62b9cbbc1168a7c81182ba747850ea67c24294a6703fb341471191da484f8\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1676237031},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:40a0af9b58137c413272f3533763f7affd5db97e6ef410a6aeabce6d81a246ee\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:7e9b6f6bdbfa69f6106bc85eaee51d908ede4be851b578362af443af6bf732a8\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202031349},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:06acdd148ddfe14125d9ab253b9eb0dca1930047787f5b277a21bc88cdfd5030\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a649014abb6de45bd5e9eba64d76cf536ed766c876c58c0e1388115bafecf763\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1185399018},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3
840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"na
mes\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: E0129 15:32:20.683252 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get 
\"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: E0129 15:32:20.683766 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: E0129 15:32:20.684302 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: E0129 15:32:20.684771 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:20 crc kubenswrapper[5008]: E0129 15:32:20.684864 5008 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.991127 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:20 crc kubenswrapper[5008]: I0129 15:32:20.992311 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.292005 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.292711 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.292926 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.293241 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.293775 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.294172 5008 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.294516 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.294814 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:21 crc kubenswrapper[5008]: E0129 15:32:21.388152 5008 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.50:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f3d724d56d6e9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 15:32:19.712997097 +0000 UTC m=+283.385851334,LastTimestamp:2026-01-29 15:32:19.712997097 +0000 UTC m=+283.385851334,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.491635 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-var-lock\") pod \"af4b11bc-2d2f-4e68-ab59-cbc08fecba52\" (UID: \"af4b11bc-2d2f-4e68-ab59-cbc08fecba52\") " Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.492032 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-kubelet-dir\") pod \"af4b11bc-2d2f-4e68-ab59-cbc08fecba52\" (UID: \"af4b11bc-2d2f-4e68-ab59-cbc08fecba52\") " Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.492186 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-kube-api-access\") pod \"af4b11bc-2d2f-4e68-ab59-cbc08fecba52\" (UID: \"af4b11bc-2d2f-4e68-ab59-cbc08fecba52\") " Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.492548 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-var-lock" (OuterVolumeSpecName: "var-lock") pod "af4b11bc-2d2f-4e68-ab59-cbc08fecba52" (UID: "af4b11bc-2d2f-4e68-ab59-cbc08fecba52"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.492611 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "af4b11bc-2d2f-4e68-ab59-cbc08fecba52" (UID: "af4b11bc-2d2f-4e68-ab59-cbc08fecba52"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.524900 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "af4b11bc-2d2f-4e68-ab59-cbc08fecba52" (UID: "af4b11bc-2d2f-4e68-ab59-cbc08fecba52"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.593939 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.593970 5008 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:21 crc kubenswrapper[5008]: I0129 15:32:21.593979 5008 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af4b11bc-2d2f-4e68-ab59-cbc08fecba52-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:21 crc kubenswrapper[5008]: E0129 15:32:21.738023 5008 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 29 15:32:21 crc kubenswrapper[5008]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-67df9d9956-9zzpb_openshift-controller-manager_17f45bda-9243-4ae2-858a-e32e62abeebc_0(257a596fdbb7863c38b50181853f119d6120d4add6c8b74253bb943ef07cc0e0): error adding pod openshift-controller-manager_controller-manager-67df9d9956-9zzpb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"257a596fdbb7863c38b50181853f119d6120d4add6c8b74253bb943ef07cc0e0" Netns:"/var/run/netns/1ed2d01f-3b8f-4ca0-9954-8f91c613d415" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-67df9d9956-9zzpb;K8S_POD_INFRA_CONTAINER_ID=257a596fdbb7863c38b50181853f119d6120d4add6c8b74253bb943ef07cc0e0;K8S_POD_UID=17f45bda-9243-4ae2-858a-e32e62abeebc" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-67df9d9956-9zzpb] networking: Multus: [openshift-controller-manager/controller-manager-67df9d9956-9zzpb/17f45bda-9243-4ae2-858a-e32e62abeebc]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-67df9d9956-9zzpb in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-67df9d9956-9zzpb in out of cluster comm: status update failed for pod /: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-67df9d9956-9zzpb?timeout=1m0s": dial tcp 38.102.83.50:6443: connect: connection refused Jan 29 15:32:21 crc kubenswrapper[5008]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 29 15:32:21 crc kubenswrapper[5008]: > Jan 29 15:32:21 crc kubenswrapper[5008]: E0129 15:32:21.738104 5008 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 29 15:32:21 crc kubenswrapper[5008]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-67df9d9956-9zzpb_openshift-controller-manager_17f45bda-9243-4ae2-858a-e32e62abeebc_0(257a596fdbb7863c38b50181853f119d6120d4add6c8b74253bb943ef07cc0e0): error adding pod openshift-controller-manager_controller-manager-67df9d9956-9zzpb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"257a596fdbb7863c38b50181853f119d6120d4add6c8b74253bb943ef07cc0e0" Netns:"/var/run/netns/1ed2d01f-3b8f-4ca0-9954-8f91c613d415" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-67df9d9956-9zzpb;K8S_POD_INFRA_CONTAINER_ID=257a596fdbb7863c38b50181853f119d6120d4add6c8b74253bb943ef07cc0e0;K8S_POD_UID=17f45bda-9243-4ae2-858a-e32e62abeebc" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-67df9d9956-9zzpb] networking: Multus: [openshift-controller-manager/controller-manager-67df9d9956-9zzpb/17f45bda-9243-4ae2-858a-e32e62abeebc]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-67df9d9956-9zzpb in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-67df9d9956-9zzpb in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-67df9d9956-9zzpb?timeout=1m0s": dial tcp 38.102.83.50:6443: connect: connection refused Jan 29 15:32:21 crc kubenswrapper[5008]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 29 15:32:21 crc kubenswrapper[5008]: > pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:21 crc kubenswrapper[5008]: E0129 15:32:21.738127 5008 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 29 15:32:21 crc kubenswrapper[5008]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-67df9d9956-9zzpb_openshift-controller-manager_17f45bda-9243-4ae2-858a-e32e62abeebc_0(257a596fdbb7863c38b50181853f119d6120d4add6c8b74253bb943ef07cc0e0): error adding pod openshift-controller-manager_controller-manager-67df9d9956-9zzpb to CNI network "multus-cni-network": plugin type="multus-shim" 
name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"257a596fdbb7863c38b50181853f119d6120d4add6c8b74253bb943ef07cc0e0" Netns:"/var/run/netns/1ed2d01f-3b8f-4ca0-9954-8f91c613d415" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-67df9d9956-9zzpb;K8S_POD_INFRA_CONTAINER_ID=257a596fdbb7863c38b50181853f119d6120d4add6c8b74253bb943ef07cc0e0;K8S_POD_UID=17f45bda-9243-4ae2-858a-e32e62abeebc" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-67df9d9956-9zzpb] networking: Multus: [openshift-controller-manager/controller-manager-67df9d9956-9zzpb/17f45bda-9243-4ae2-858a-e32e62abeebc]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-67df9d9956-9zzpb in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-67df9d9956-9zzpb in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-67df9d9956-9zzpb?timeout=1m0s": dial tcp 38.102.83.50:6443: connect: connection refused Jan 29 15:32:21 crc kubenswrapper[5008]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 29 15:32:21 crc kubenswrapper[5008]: > pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:21 crc kubenswrapper[5008]: E0129 15:32:21.738188 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-67df9d9956-9zzpb_openshift-controller-manager(17f45bda-9243-4ae2-858a-e32e62abeebc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-67df9d9956-9zzpb_openshift-controller-manager(17f45bda-9243-4ae2-858a-e32e62abeebc)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-67df9d9956-9zzpb_openshift-controller-manager_17f45bda-9243-4ae2-858a-e32e62abeebc_0(257a596fdbb7863c38b50181853f119d6120d4add6c8b74253bb943ef07cc0e0): error adding pod openshift-controller-manager_controller-manager-67df9d9956-9zzpb to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"257a596fdbb7863c38b50181853f119d6120d4add6c8b74253bb943ef07cc0e0\\\" Netns:\\\"/var/run/netns/1ed2d01f-3b8f-4ca0-9954-8f91c613d415\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-67df9d9956-9zzpb;K8S_POD_INFRA_CONTAINER_ID=257a596fdbb7863c38b50181853f119d6120d4add6c8b74253bb943ef07cc0e0;K8S_POD_UID=17f45bda-9243-4ae2-858a-e32e62abeebc\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-67df9d9956-9zzpb] networking: Multus: [openshift-controller-manager/controller-manager-67df9d9956-9zzpb/17f45bda-9243-4ae2-858a-e32e62abeebc]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-67df9d9956-9zzpb in out of cluster comm: SetNetworkStatus: failed to 
update the pod controller-manager-67df9d9956-9zzpb in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-67df9d9956-9zzpb?timeout=1m0s\\\": dial tcp 38.102.83.50:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" podUID="17f45bda-9243-4ae2-858a-e32e62abeebc" Jan 29 15:32:22 crc kubenswrapper[5008]: I0129 15:32:22.009346 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"af4b11bc-2d2f-4e68-ab59-cbc08fecba52","Type":"ContainerDied","Data":"3fbb18559f4006c21dcfe445af54451f7c34b27ece772e485463a9d59d5f3753"} Jan 29 15:32:22 crc kubenswrapper[5008]: I0129 15:32:22.009671 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fbb18559f4006c21dcfe445af54451f7c34b27ece772e485463a9d59d5f3753" Jan 29 15:32:22 crc kubenswrapper[5008]: I0129 15:32:22.009983 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:32:22 crc kubenswrapper[5008]: I0129 15:32:22.057569 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:22 crc kubenswrapper[5008]: I0129 15:32:22.058460 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:22 crc kubenswrapper[5008]: I0129 15:32:22.059019 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:22 crc kubenswrapper[5008]: I0129 15:32:22.059643 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:22 crc kubenswrapper[5008]: I0129 15:32:22.060851 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:22 crc kubenswrapper[5008]: I0129 15:32:22.061566 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:22 crc kubenswrapper[5008]: I0129 15:32:22.062351 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:22 crc kubenswrapper[5008]: I0129 15:32:22.442300 5008 scope.go:117] "RemoveContainer" containerID="dbb82c43ba7943df2747aa78a2127da4c2cba3ad40144842a2f920c5e71f8479" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.019648 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"91c8b8e183ceb639dc42455dc6714f740f7596aa5a568725b22cbea1339a8752"} Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.020154 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.020589 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.020870 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.021174 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.021476 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:23 
crc kubenswrapper[5008]: I0129 15:32:23.021692 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.022064 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.023941 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.025036 5008 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7" exitCode=0 Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.182048 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.182621 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.183115 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.183274 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.183476 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.183717 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.183943 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" 
pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.184144 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.184343 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.184650 5008 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.318936 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.319001 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.319046 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.319096 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.319190 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.319328 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.319484 5008 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.319508 5008 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.319527 5008 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:23 crc kubenswrapper[5008]: I0129 15:32:23.334911 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.036021 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.037709 5008 scope.go:117] "RemoveContainer" containerID="412b5d429b7a86a87e710ba4a0c81a54b03108f41ce6cc29f429aede063eb76c" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.037863 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.039233 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.039974 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.040551 5008 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.041898 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.042803 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.043987 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.044905 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.045635 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.046652 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.047347 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.047902 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.048405 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.048930 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.049513 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.050075 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.050550 5008 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.057058 5008 scope.go:117] "RemoveContainer" containerID="3397d4d59fbac09e49247425eb263f25d13c62a72013146c981b606f6389165d" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.071982 5008 scope.go:117] "RemoveContainer" containerID="5ed1794f8b68a0810301b6f7b91e03cfb269b35084dd97b2f153789ba70970f2" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.089339 5008 scope.go:117] "RemoveContainer" 
containerID="677c04a1dffb767e6149ccb064772548ca29cf553afc20cb4eb82a5f85742ff0" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.104952 5008 scope.go:117] "RemoveContainer" containerID="4702656214e54c2881bf198364622648679a9981721d09a6b1551a134c63b7d7" Jan 29 15:32:24 crc kubenswrapper[5008]: I0129 15:32:24.124829 5008 scope.go:117] "RemoveContainer" containerID="25a0c747be0a011a60911a631709b27620d8ebc5afea1d21dbeb71f26d971f6e" Jan 29 15:32:26 crc kubenswrapper[5008]: I0129 15:32:26.772182 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cwgw5" Jan 29 15:32:26 crc kubenswrapper[5008]: I0129 15:32:26.773590 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:26 crc kubenswrapper[5008]: I0129 15:32:26.774121 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:26 crc kubenswrapper[5008]: I0129 15:32:26.774420 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:26 crc kubenswrapper[5008]: I0129 15:32:26.774682 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:26 crc kubenswrapper[5008]: I0129 15:32:26.774987 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:26 crc kubenswrapper[5008]: I0129 15:32:26.775280 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:26 crc kubenswrapper[5008]: I0129 15:32:26.775524 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:26 crc kubenswrapper[5008]: I0129 15:32:26.775778 
5008 status_manager.go:851] "Failed to get status for pod" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" pod="openshift-marketplace/certified-operators-cwgw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cwgw5\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.028309 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.029190 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.029734 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.030499 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.031275 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.031754 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.032290 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.032746 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.033270 5008 status_manager.go:851] "Failed to get status for pod" 
podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" pod="openshift-marketplace/certified-operators-cwgw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cwgw5\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.033845 5008 status_manager.go:851] "Failed to get status for pod" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" pod="openshift-marketplace/certified-operators-z9t2h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z9t2h\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.328967 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.330016 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.330677 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.331337 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.331908 5008 status_manager.go:851] "Failed to get status for pod" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" pod="openshift-marketplace/certified-operators-cwgw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cwgw5\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.332350 5008 status_manager.go:851] "Failed to get status for pod" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" pod="openshift-marketplace/certified-operators-z9t2h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z9t2h\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.332876 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc 
kubenswrapper[5008]: I0129 15:32:27.333566 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:27 crc kubenswrapper[5008]: I0129 15:32:27.334041 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:28 crc kubenswrapper[5008]: E0129 15:32:28.452112 5008 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:28 crc kubenswrapper[5008]: E0129 15:32:28.452637 5008 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:28 crc kubenswrapper[5008]: E0129 15:32:28.453332 5008 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:28 crc kubenswrapper[5008]: E0129 15:32:28.454031 5008 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:28 crc kubenswrapper[5008]: E0129 15:32:28.454461 5008 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:28 crc kubenswrapper[5008]: I0129 15:32:28.454532 5008 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 29 15:32:28 crc kubenswrapper[5008]: E0129 15:32:28.455094 5008 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="200ms" Jan 29 15:32:28 crc kubenswrapper[5008]: E0129 15:32:28.656064 5008 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="400ms" Jan 29 15:32:29 crc kubenswrapper[5008]: E0129 15:32:29.057766 5008 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="800ms" Jan 29 15:32:29 crc kubenswrapper[5008]: I0129 15:32:29.138028 5008 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fd6nq" Jan 29 15:32:29 crc kubenswrapper[5008]: I0129 15:32:29.139141 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fd6nq" Jan 29 15:32:29 crc kubenswrapper[5008]: I0129 15:32:29.206498 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fd6nq" Jan 29 15:32:29 crc kubenswrapper[5008]: I0129 15:32:29.207365 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:29 crc kubenswrapper[5008]: I0129 15:32:29.208052 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:29 crc kubenswrapper[5008]: I0129 15:32:29.208612 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:29 crc kubenswrapper[5008]: I0129 15:32:29.209094 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:29 crc kubenswrapper[5008]: I0129 15:32:29.209488 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:29 crc kubenswrapper[5008]: I0129 15:32:29.209975 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:29 crc kubenswrapper[5008]: I0129 15:32:29.210453 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:29 crc kubenswrapper[5008]: I0129 15:32:29.211018 5008 status_manager.go:851] "Failed to get status for pod" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" pod="openshift-marketplace/certified-operators-cwgw5" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cwgw5\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:29 crc kubenswrapper[5008]: I0129 15:32:29.211478 5008 status_manager.go:851] "Failed to get status for pod" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" pod="openshift-marketplace/certified-operators-z9t2h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z9t2h\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:29 crc kubenswrapper[5008]: E0129 15:32:29.858977 5008 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="1.6s" Jan 29 15:32:30 crc kubenswrapper[5008]: I0129 15:32:30.117712 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fd6nq" Jan 29 15:32:30 crc kubenswrapper[5008]: I0129 15:32:30.118568 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:30 crc kubenswrapper[5008]: I0129 15:32:30.119099 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:30 crc kubenswrapper[5008]: I0129 15:32:30.119582 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:30 crc kubenswrapper[5008]: I0129 15:32:30.120213 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:30 crc kubenswrapper[5008]: I0129 15:32:30.120922 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:30 crc kubenswrapper[5008]: I0129 15:32:30.121390 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:30 crc 
kubenswrapper[5008]: I0129 15:32:30.121919 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:30 crc kubenswrapper[5008]: I0129 15:32:30.122498 5008 status_manager.go:851] "Failed to get status for pod" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" pod="openshift-marketplace/certified-operators-cwgw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cwgw5\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:30 crc kubenswrapper[5008]: I0129 15:32:30.123341 5008 status_manager.go:851] "Failed to get status for pod" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" pod="openshift-marketplace/certified-operators-z9t2h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z9t2h\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:31 crc kubenswrapper[5008]: E0129 15:32:31.000073 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:32:30Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:32:30Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:32:30Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:32:30Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:15db2d5dee506f58d0ee5bf1684107211c0473c43ef6111e13df0c55850f77c9\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:acd62b9cbbc1168a7c81182ba747850ea67c24294a6703fb341471191da484f8\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1676237031},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:40a0af9b58137c413272f3533763f7affd5db97e6ef410a6aeabce6d81a246ee\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:7e9b6f6bdbfa69f6106bc85eaee51d908ede4be851b578362af443af6bf732a8\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202031349},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:06acdd148ddfe14125d9ab253b9eb0dca1930047787f5b277a21bc88cdfd5030\\\",\\\"registry
.redhat.io/redhat/certified-operator-index@sha256:a649014abb6de45bd5e9eba64d76cf536ed766c876c58c0e1388115bafecf763\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1185399018},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":
[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:31 crc kubenswrapper[5008]: E0129 15:32:31.001567 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:31 crc kubenswrapper[5008]: E0129 15:32:31.002072 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:31 crc kubenswrapper[5008]: E0129 15:32:31.002489 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:31 crc kubenswrapper[5008]: E0129 15:32:31.002933 5008 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:31 crc kubenswrapper[5008]: E0129 15:32:31.002981 5008 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:32:31 crc kubenswrapper[5008]: E0129 15:32:31.390017 5008 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.50:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f3d724d56d6e9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 15:32:19.712997097 +0000 UTC m=+283.385851334,LastTimestamp:2026-01-29 
15:32:19.712997097 +0000 UTC m=+283.385851334,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 15:32:31 crc kubenswrapper[5008]: E0129 15:32:31.460091 5008 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="3.2s" Jan 29 15:32:33 crc kubenswrapper[5008]: I0129 15:32:33.106395 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 15:32:33 crc kubenswrapper[5008]: I0129 15:32:33.106479 5008 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca" exitCode=1 Jan 29 15:32:33 crc kubenswrapper[5008]: I0129 15:32:33.106540 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca"} Jan 29 15:32:33 crc kubenswrapper[5008]: I0129 15:32:33.107326 5008 scope.go:117] "RemoveContainer" containerID="c778df6f5c031669143db37980250c01473f3d9856acc44a6ef51852822f99ca" Jan 29 15:32:33 crc kubenswrapper[5008]: I0129 15:32:33.107976 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:33 crc kubenswrapper[5008]: I0129 15:32:33.108418 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:33 crc kubenswrapper[5008]: I0129 15:32:33.108874 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:33 crc kubenswrapper[5008]: I0129 15:32:33.109518 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:33 crc kubenswrapper[5008]: I0129 15:32:33.117219 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: 
connection refused" Jan 29 15:32:33 crc kubenswrapper[5008]: I0129 15:32:33.118059 5008 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:33 crc kubenswrapper[5008]: I0129 15:32:33.118604 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:33 crc kubenswrapper[5008]: I0129 15:32:33.119025 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:33 crc kubenswrapper[5008]: I0129 15:32:33.119809 5008 status_manager.go:851] "Failed to get status for pod" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" pod="openshift-marketplace/certified-operators-z9t2h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z9t2h\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:33 crc kubenswrapper[5008]: I0129 15:32:33.129499 5008 status_manager.go:851] "Failed to get status for pod" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" pod="openshift-marketplace/certified-operators-cwgw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cwgw5\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.115067 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.115362 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5764ee38ac7740acad09b2b6419d8e3dc71434980dac60260fe3d6dd067682f4"} Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.116685 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.117416 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 
15:32:34.117915 5008 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.118396 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.118872 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.119252 5008 status_manager.go:851] "Failed to get status for pod" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" pod="openshift-marketplace/certified-operators-cwgw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cwgw5\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.119544 5008 status_manager.go:851] "Failed to get status for pod" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" pod="openshift-marketplace/certified-operators-z9t2h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z9t2h\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.119855 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.120161 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.120455 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.322726 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.323904 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.324589 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.325341 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.325833 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.326351 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.326859 5008 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.327349 5008 status_manager.go:851] "Failed to get status for pod" podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.327931 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.328468 5008 status_manager.go:851] "Failed to get status for pod" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" 
pod="openshift-marketplace/certified-operators-cwgw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cwgw5\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.329030 5008 status_manager.go:851] "Failed to get status for pod" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" pod="openshift-marketplace/certified-operators-z9t2h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z9t2h\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.342964 5008 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.343026 5008 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2" Jan 29 15:32:34 crc kubenswrapper[5008]: E0129 15:32:34.343560 5008 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:34 crc kubenswrapper[5008]: I0129 15:32:34.344273 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:34 crc kubenswrapper[5008]: W0129 15:32:34.375593 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-1df3554e8c15c141cb2b6211852af25fcbdcccb5a410c3e992a90bab5a6d4263 WatchSource:0}: Error finding container 1df3554e8c15c141cb2b6211852af25fcbdcccb5a410c3e992a90bab5a6d4263: Status 404 returned error can't find the container with id 1df3554e8c15c141cb2b6211852af25fcbdcccb5a410c3e992a90bab5a6d4263 Jan 29 15:32:34 crc kubenswrapper[5008]: E0129 15:32:34.661734 5008 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="6.4s" Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.123679 5008 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="31a4d285f97f87314a2f653c2e112a58f2c450ea69c61ed5f562a53d36a3bc5c" exitCode=0 Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.123727 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"31a4d285f97f87314a2f653c2e112a58f2c450ea69c61ed5f562a53d36a3bc5c"} Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.123755 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1df3554e8c15c141cb2b6211852af25fcbdcccb5a410c3e992a90bab5a6d4263"} Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.124184 5008 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2" Jan 29 15:32:35 crc 
kubenswrapper[5008]: I0129 15:32:35.124211 5008 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2" Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.124818 5008 status_manager.go:851] "Failed to get status for pod" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" pod="openshift-marketplace/redhat-operators-lhtht" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lhtht\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:35 crc kubenswrapper[5008]: E0129 15:32:35.124866 5008 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.125206 5008 status_manager.go:851] "Failed to get status for pod" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.125719 5008 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.126050 5008 status_manager.go:851] "Failed to get status for pod" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" pod="openshift-marketplace/redhat-marketplace-fd6nq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fd6nq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.126395 5008 status_manager.go:851] "Failed to get status for pod" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" pod="openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-556b59fcb8-5lkx4\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.126600 5008 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.126888 5008 status_manager.go:851] "Failed to get status for pod" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" pod="openshift-marketplace/redhat-operators-tst9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tst9c\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.127250 5008 status_manager.go:851] "Failed to get status for pod" 
podUID="64612440-e59b-46bb-a60f-f10989166e58" pod="openshift-controller-manager/controller-manager-585448bccb-4m9fq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-585448bccb-4m9fq\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.127810 5008 status_manager.go:851] "Failed to get status for pod" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" pod="openshift-marketplace/certified-operators-cwgw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cwgw5\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.128149 5008 status_manager.go:851] "Failed to get status for pod" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" pod="openshift-marketplace/certified-operators-z9t2h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z9t2h\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.322855 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.323288 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.875840 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:32:35 crc kubenswrapper[5008]: I0129 15:32:35.879216 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:32:36 crc kubenswrapper[5008]: I0129 15:32:36.134192 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e37b92d8db62917c30f3a25b7211db52e69ef372abeabb774760c1ea044d6ce2"} Jan 29 15:32:36 crc kubenswrapper[5008]: I0129 15:32:36.134242 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e8fb6efea22bd5a89d979de81f82ae27054eb0218de4e7ce13ec8133a6f83fa3"} Jan 29 15:32:36 crc kubenswrapper[5008]: I0129 15:32:36.134260 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ed902175e4fe84e95e8ef5e5b1839935090008d516f3f7b4f665c490f435bea0"} Jan 29 15:32:36 crc kubenswrapper[5008]: I0129 15:32:36.134271 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3705f6651751eff3964920cef0b92ba5043891505c354185f88451e8129c849e"} Jan 29 15:32:36 crc kubenswrapper[5008]: I0129 15:32:36.134373 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:32:37 crc kubenswrapper[5008]: I0129 15:32:37.026548 5008 cert_rotation.go:91] certificate rotation detected, shutting down client connections to 
start using new credentials Jan 29 15:32:37 crc kubenswrapper[5008]: I0129 15:32:37.143169 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"05ea3cc82ba327e01f17af25eec980aa89be6e7eeec14d2dcc0923ebf84569de"} Jan 29 15:32:37 crc kubenswrapper[5008]: I0129 15:32:37.143669 5008 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2" Jan 29 15:32:37 crc kubenswrapper[5008]: I0129 15:32:37.143696 5008 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2" Jan 29 15:32:39 crc kubenswrapper[5008]: I0129 15:32:39.344666 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:39 crc kubenswrapper[5008]: I0129 15:32:39.344982 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:39 crc kubenswrapper[5008]: I0129 15:32:39.358105 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:41 crc kubenswrapper[5008]: W0129 15:32:41.841473 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17f45bda_9243_4ae2_858a_e32e62abeebc.slice/crio-ff460f6e4a20ab94042fb5b7e4ffa51bff723245acb3725b04c391036ec1f691 WatchSource:0}: Error finding container ff460f6e4a20ab94042fb5b7e4ffa51bff723245acb3725b04c391036ec1f691: Status 404 returned error can't find the container with id ff460f6e4a20ab94042fb5b7e4ffa51bff723245acb3725b04c391036ec1f691 Jan 29 15:32:42 crc kubenswrapper[5008]: I0129 15:32:42.152640 5008 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:42 crc kubenswrapper[5008]: I0129 15:32:42.175298 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" event={"ID":"17f45bda-9243-4ae2-858a-e32e62abeebc","Type":"ContainerStarted","Data":"7aba4d7f50689c07d3cd7a99f1cf234a06ce38d42971a905509a9922cd6383ea"} Jan 29 15:32:42 crc kubenswrapper[5008]: I0129 15:32:42.175354 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" event={"ID":"17f45bda-9243-4ae2-858a-e32e62abeebc","Type":"ContainerStarted","Data":"ff460f6e4a20ab94042fb5b7e4ffa51bff723245acb3725b04c391036ec1f691"} Jan 29 15:32:42 crc kubenswrapper[5008]: I0129 15:32:42.175706 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:42 crc kubenswrapper[5008]: I0129 15:32:42.175777 5008 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2" Jan 29 15:32:42 crc kubenswrapper[5008]: I0129 15:32:42.175806 5008 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2" Jan 29 15:32:42 crc kubenswrapper[5008]: I0129 15:32:42.180057 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:42 crc kubenswrapper[5008]: I0129 
15:32:42.216430 5008 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="eb24cea0-8aff-4f3f-809b-ea8aee184ece" Jan 29 15:32:43 crc kubenswrapper[5008]: I0129 15:32:43.182431 5008 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2" Jan 29 15:32:43 crc kubenswrapper[5008]: I0129 15:32:43.182495 5008 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2" Jan 29 15:32:47 crc kubenswrapper[5008]: I0129 15:32:47.337825 5008 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="eb24cea0-8aff-4f3f-809b-ea8aee184ece" Jan 29 15:32:48 crc kubenswrapper[5008]: I0129 15:32:48.706615 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:32:49 crc kubenswrapper[5008]: I0129 15:32:49.303189 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:49 crc kubenswrapper[5008]: I0129 15:32:49.309870 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:32:51 crc kubenswrapper[5008]: I0129 15:32:51.503960 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 29 15:32:51 crc kubenswrapper[5008]: I0129 15:32:51.895665 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 29 15:32:51 crc kubenswrapper[5008]: I0129 15:32:51.968167 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 29 15:32:52 crc kubenswrapper[5008]: I0129 15:32:52.557744 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 15:32:52 crc kubenswrapper[5008]: I0129 15:32:52.582876 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 29 15:32:52 crc kubenswrapper[5008]: I0129 15:32:52.858595 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 29 15:32:52 crc kubenswrapper[5008]: I0129 15:32:52.995314 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 15:32:53 crc kubenswrapper[5008]: I0129 15:32:53.188653 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 29 15:32:53 crc kubenswrapper[5008]: I0129 15:32:53.280575 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 29 15:32:53 crc kubenswrapper[5008]: I0129 15:32:53.414889 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 29 15:32:53 crc kubenswrapper[5008]: I0129 
15:32:53.492755 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 29 15:32:53 crc kubenswrapper[5008]: I0129 15:32:53.548689 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 29 15:32:53 crc kubenswrapper[5008]: I0129 15:32:53.658511 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 29 15:32:53 crc kubenswrapper[5008]: I0129 15:32:53.762517 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 29 15:32:53 crc kubenswrapper[5008]: I0129 15:32:53.849709 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 29 15:32:53 crc kubenswrapper[5008]: I0129 15:32:53.965062 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 29 15:32:54 crc kubenswrapper[5008]: I0129 15:32:54.089378 5008 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 29 15:32:54 crc kubenswrapper[5008]: I0129 15:32:54.225007 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 29 15:32:54 crc kubenswrapper[5008]: I0129 15:32:54.317094 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 29 15:32:54 crc kubenswrapper[5008]: I0129 15:32:54.520889 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 29 15:32:54 crc kubenswrapper[5008]: I0129 15:32:54.647543 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 29 15:32:54 crc kubenswrapper[5008]: I0129 15:32:54.724925 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 29 15:32:54 crc kubenswrapper[5008]: I0129 15:32:54.854899 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 29 15:32:54 crc kubenswrapper[5008]: I0129 15:32:54.886292 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 29 15:32:55 crc kubenswrapper[5008]: I0129 15:32:55.013634 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 29 15:32:55 crc kubenswrapper[5008]: I0129 15:32:55.045327 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 29 15:32:55 crc kubenswrapper[5008]: I0129 15:32:55.285745 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 29 15:32:55 crc kubenswrapper[5008]: I0129 15:32:55.328165 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 29 15:32:55 crc kubenswrapper[5008]: I0129 15:32:55.378603 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 29 15:32:55 crc kubenswrapper[5008]: I0129 15:32:55.441026 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 29 15:32:55 crc kubenswrapper[5008]: I0129 15:32:55.548453 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 29 15:32:55 crc kubenswrapper[5008]: I0129 15:32:55.561841 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 29 15:32:55 crc kubenswrapper[5008]: I0129 15:32:55.586344 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 29 15:32:55 crc kubenswrapper[5008]: I0129 15:32:55.641219 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 29 15:32:55 crc kubenswrapper[5008]: I0129 15:32:55.832718 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 29 15:32:55 crc kubenswrapper[5008]: I0129 15:32:55.954320 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.043577 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.090553 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.097446 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.119191 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.253133 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.268267 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.330218 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.337356 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.363919 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.398177 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.413462 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.484198 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.584272 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.635881 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.705949 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.714690 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.771501 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.809341 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.818708 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.833714 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.974737 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.975259 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 29 15:32:56 crc kubenswrapper[5008]: I0129 15:32:56.991464 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 29 15:32:57 crc kubenswrapper[5008]: I0129 15:32:57.127307 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 29 15:32:57 crc kubenswrapper[5008]: I0129 15:32:57.133424 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 29 15:32:57 crc kubenswrapper[5008]: I0129 15:32:57.154892 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 29 15:32:57 crc kubenswrapper[5008]: I0129 15:32:57.183423 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 29 15:32:57 crc kubenswrapper[5008]: I0129 15:32:57.193180 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 29 15:32:57 crc kubenswrapper[5008]: I0129 15:32:57.238283 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 29 15:32:57 crc kubenswrapper[5008]: I0129 15:32:57.541842 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 29 15:32:57 crc kubenswrapper[5008]: I0129 15:32:57.581521 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 29 15:32:57 crc kubenswrapper[5008]: I0129 15:32:57.645634 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 29 15:32:57 crc kubenswrapper[5008]: I0129 15:32:57.683198 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 29 15:32:57 crc kubenswrapper[5008]: I0129 15:32:57.715618 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 29 15:32:57 crc kubenswrapper[5008]: I0129 15:32:57.781618 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 29 15:32:57 crc kubenswrapper[5008]: I0129 15:32:57.819422 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 29 15:32:57 crc kubenswrapper[5008]: I0129 15:32:57.865914 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 29 15:32:57 crc kubenswrapper[5008]: I0129 15:32:57.909217 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.012748 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.053217 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.101309 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.115903 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.118180 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.162996 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.169996 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.319402 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.345338 5008 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.365074 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.382410 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.453638 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.509157 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.572746 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.642737 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.861755 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.943937 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.956208 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 29 15:32:58 crc kubenswrapper[5008]: I0129 15:32:58.965454 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 29 15:32:59 crc kubenswrapper[5008]: I0129 15:32:59.261593 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 29 15:32:59 crc kubenswrapper[5008]: I0129 15:32:59.287955 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 29 15:32:59 crc kubenswrapper[5008]: I0129 15:32:59.304412 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 29 15:32:59 crc kubenswrapper[5008]: I0129 15:32:59.380706 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 29 15:32:59 crc kubenswrapper[5008]: I0129 15:32:59.447589 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 29 15:32:59 crc kubenswrapper[5008]: I0129 15:32:59.470485 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 29 15:32:59 crc kubenswrapper[5008]: I0129 15:32:59.476399 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 29 15:32:59 crc kubenswrapper[5008]: I0129 15:32:59.545307 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 29 15:32:59 crc kubenswrapper[5008]: I0129 15:32:59.570602 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 29 15:32:59 crc kubenswrapper[5008]: I0129 15:32:59.580924 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 29 15:32:59 crc kubenswrapper[5008]: I0129 15:32:59.637594 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 29 15:32:59 crc kubenswrapper[5008]: I0129 15:32:59.658732 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 29 15:32:59 crc kubenswrapper[5008]: I0129 15:32:59.679758 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 29 15:32:59 crc kubenswrapper[5008]: I0129 15:32:59.853309 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 29 15:32:59 crc kubenswrapper[5008]: I0129 15:32:59.917184 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 29 15:32:59 crc kubenswrapper[5008]: I0129 15:32:59.960587 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 29 15:33:00 crc kubenswrapper[5008]: I0129 15:33:00.025775 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 29 15:33:00 crc kubenswrapper[5008]: I0129 15:33:00.089459 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 29 15:33:00 crc kubenswrapper[5008]: I0129 15:33:00.093354 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 29 15:33:00 crc kubenswrapper[5008]: I0129 15:33:00.111263 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 29 15:33:00 crc kubenswrapper[5008]: I0129 15:33:00.232277 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 29 15:33:00 crc kubenswrapper[5008]: I0129 15:33:00.370447 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 29 15:33:00 crc kubenswrapper[5008]: I0129 15:33:00.581561 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 29 15:33:00 crc kubenswrapper[5008]: I0129 15:33:00.718545 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 29 15:33:00 crc kubenswrapper[5008]: I0129 15:33:00.720191 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 29 15:33:00 crc kubenswrapper[5008]: I0129 15:33:00.731171 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 29 15:33:00 crc kubenswrapper[5008]: I0129 15:33:00.736881 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 29 15:33:00 crc kubenswrapper[5008]: I0129 15:33:00.749991 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 29 15:33:00 crc kubenswrapper[5008]: I0129 15:33:00.844677 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 29 15:33:00 crc kubenswrapper[5008]: I0129 15:33:00.856399 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 29 15:33:00 crc kubenswrapper[5008]: I0129 15:33:00.907455 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 29 15:33:00 crc kubenswrapper[5008]: I0129 15:33:00.974979 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.001659 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.047507 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.070359 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.146360 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.188122 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.306338 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.343704 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.380416 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.402823 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.589129 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.672115 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.677410 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.726193 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.730971 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.773898 5008 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.894136 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 29 15:33:01 crc kubenswrapper[5008]: I0129 15:33:01.974053 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.005853 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.007662 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.027839 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.052761 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.057558 5008 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.090582 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.115531 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.159371 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.261196 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.349360 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.489015 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.599230 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.623156 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.647524 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.653467 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.740083 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.789099 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.857194 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.870564 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.879088 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 29 15:33:02 crc kubenswrapper[5008]: I0129 15:33:02.957133 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.023502 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.045500 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.057287 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.060684 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.073369 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.220007 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.285815 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.302969 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.310689 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.500884 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.541841 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.560040 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.569930 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.676969 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.676981 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.685250 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.794988 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.837715 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.848491 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.854553 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.954512 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 29 15:33:03 crc kubenswrapper[5008]: I0129 15:33:03.968689 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.011444 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.019125 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.055351 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.102178 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.126049 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.222135 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.243064 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.316727 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.346286 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.440903 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.455002 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.469206 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.523888 5008 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.583899 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.588114 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.640492 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.692978 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.709758 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.734054 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 29 15:33:04 crc kubenswrapper[5008]: I0129 15:33:04.949696 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.005072 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.035627 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.115922 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.218848 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.278289 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.323959 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.427817 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.482580 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.489986 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.506461 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.552698 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.607654 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.764731 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.850701 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.910080 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.945577 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 29 15:33:05 crc kubenswrapper[5008]: I0129 15:33:05.955008 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 29 15:33:06 crc kubenswrapper[5008]: I0129 15:33:06.087746 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 29 15:33:06 crc kubenswrapper[5008]: I0129 15:33:06.252983 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 29 15:33:06 crc kubenswrapper[5008]: I0129 15:33:06.324393 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 29 15:33:06 crc kubenswrapper[5008]: I0129 15:33:06.589128 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 29 15:33:06 crc kubenswrapper[5008]: I0129 15:33:06.607645 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 29 15:33:06 crc kubenswrapper[5008]: I0129 15:33:06.745870 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 29 15:33:06 crc kubenswrapper[5008]: I0129 15:33:06.801838 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 29 15:33:06 crc kubenswrapper[5008]: I0129 15:33:06.820868 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 29 15:33:06 crc kubenswrapper[5008]: I0129 15:33:06.922513 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.004686 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.073546 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.247591 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.265389 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.352630 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.500380 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.658599 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
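
The long run of reflector.go:368 "Caches populated" entries above is the kubelet's client-go informer machinery completing its initial List+Watch for each ConfigMap and Secret referenced by pods scheduled on this node, plus cluster-scoped types (*v1.Node, *v1.RuntimeClass, *v1.Service, *v1.CSIDriver, *v1.Pod). A minimal sketch of the same mechanism using the standard client-go informer API is below; it is an illustration, not kubelet code, and the kubeconfig path is an assumption.

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a reachable cluster via the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// A reflector behind this informer runs the List+Watch that the
	// "Caches populated for *v1.ConfigMap ..." lines report.
	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
	cmInformer := factory.Core().V1().ConfigMaps().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// The kubelet's log line fires at the same milestone this wait observes:
	// the local cache now holds a complete initial snapshot.
	if cache.WaitForCacheSync(stop, cmInformer.HasSynced) {
		fmt.Println("ConfigMap cache populated")
	}
}
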
Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.685159 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.820856 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.978765 5008 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.979459 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=48.979426954 podStartE2EDuration="48.979426954s" podCreationTimestamp="2026-01-29 15:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:32:41.881920145 +0000 UTC m=+305.554774382" watchObservedRunningTime="2026-01-29 15:33:07.979426954 +0000 UTC m=+331.652281281"
Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.983560 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fd6nq" podStartSLOduration=54.132814111 podStartE2EDuration="2m49.9835435s" podCreationTimestamp="2026-01-29 15:30:18 +0000 UTC" firstStartedPulling="2026-01-29 15:30:22.076005481 +0000 UTC m=+165.748859718" lastFinishedPulling="2026-01-29 15:32:17.92673483 +0000 UTC m=+281.599589107" observedRunningTime="2026-01-29 15:32:41.908276116 +0000 UTC m=+305.581130363" watchObservedRunningTime="2026-01-29 15:33:07.9835435 +0000 UTC m=+331.656397767"
Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.987036 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" podStartSLOduration=51.987018185 podStartE2EDuration="51.987018185s" podCreationTimestamp="2026-01-29 15:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:32:42.210654702 +0000 UTC m=+305.883508939" watchObservedRunningTime="2026-01-29 15:33:07.987018185 +0000 UTC m=+331.659872462"
Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.988069 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-route-controller-manager/route-controller-manager-556b59fcb8-5lkx4","openshift-controller-manager/controller-manager-585448bccb-4m9fq"]
Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.988157 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7","openshift-kube-apiserver/kube-apiserver-crc"]
Jan 29 15:33:07 crc kubenswrapper[5008]: E0129 15:33:07.988521 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" containerName="installer"
Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.988552 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" containerName="installer"
Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.988585 5008 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2"
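
In the pod_startup_latency_tracker entries above, podStartE2EDuration spans podCreationTimestamp to watchObservedRunningTime, and podStartSLOduration additionally excludes the image-pull window (firstStartedPulling to lastFinishedPulling); pods that pulled nothing show the zero time 0001-01-01 and identical values. A hedged check of that arithmetic against the redhat-marketplace-fd6nq values quoted above:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the log entry for redhat-marketplace-fd6nq;
	// the layout is Go's default time.Time formatting.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-01-29 15:30:18 +0000 UTC")
	pullStart := parse("2026-01-29 15:30:22.076005481 +0000 UTC")
	pullEnd := parse("2026-01-29 15:32:17.92673483 +0000 UTC")
	observed := parse("2026-01-29 15:33:07.9835435 +0000 UTC")

	e2e := observed.Sub(created)        // matches podStartE2EDuration="2m49.9835435s"
	slo := e2e - pullEnd.Sub(pullStart) // ~ podStartSLOduration=54.132814111 (the kubelet
	// uses monotonic m=+... readings for the pull window, so the last digits differ)
	fmt.Println(e2e, slo)
}
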
Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.988604 5008 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d62f7cc2-d2d7-4c9a-9432-8b4fb9f3fcf2"
Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.988736 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="af4b11bc-2d2f-4e68-ab59-cbc08fecba52" containerName="installer"
Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.989292 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67df9d9956-9zzpb"]
Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.989501 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7"
Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.995700 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.996868 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.996907 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.997134 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.997481 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.997701 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 29 15:33:07 crc kubenswrapper[5008]: I0129 15:33:07.998008 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 15:33:08 crc kubenswrapper[5008]: I0129 15:33:08.000960 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 29 15:33:08 crc kubenswrapper[5008]: I0129 15:33:08.021458 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=26.021434226 podStartE2EDuration="26.021434226s" podCreationTimestamp="2026-01-29 15:32:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:33:08.017327239 +0000 UTC m=+331.690181476" watchObservedRunningTime="2026-01-29 15:33:08.021434226 +0000 UTC m=+331.694288503"
Jan 29 15:33:08 crc kubenswrapper[5008]: I0129 15:33:08.122329 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50ca549b-5e64-416a-866b-1f63371db9dd-client-ca\") pod \"route-controller-manager-7d5696789b-pvrc7\" (UID: \"50ca549b-5e64-416a-866b-1f63371db9dd\") " pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7"
Jan 29 15:33:08 crc kubenswrapper[5008]: I0129 15:33:08.122492 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jffgv\" (UniqueName: \"kubernetes.io/projected/50ca549b-5e64-416a-866b-1f63371db9dd-kube-api-access-jffgv\") pod \"route-controller-manager-7d5696789b-pvrc7\" (UID: \"50ca549b-5e64-416a-866b-1f63371db9dd\") " pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7"
Jan 29 15:33:08 crc kubenswrapper[5008]: I0129 15:33:08.122683 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50ca549b-5e64-416a-866b-1f63371db9dd-config\") pod \"route-controller-manager-7d5696789b-pvrc7\" (UID: \"50ca549b-5e64-416a-866b-1f63371db9dd\") " pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7"
Jan 29 15:33:08 crc kubenswrapper[5008]: I0129 15:33:08.122808 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50ca549b-5e64-416a-866b-1f63371db9dd-serving-cert\") pod \"route-controller-manager-7d5696789b-pvrc7\" (UID: \"50ca549b-5e64-416a-866b-1f63371db9dd\") " pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7"
Jan 29 15:33:08 crc kubenswrapper[5008]: I0129 15:33:08.224201 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50ca549b-5e64-416a-866b-1f63371db9dd-config\") pod \"route-controller-manager-7d5696789b-pvrc7\" (UID: \"50ca549b-5e64-416a-866b-1f63371db9dd\") " pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7"
Jan 29 15:33:08 crc kubenswrapper[5008]: I0129 15:33:08.224318 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50ca549b-5e64-416a-866b-1f63371db9dd-serving-cert\") pod \"route-controller-manager-7d5696789b-pvrc7\" (UID: \"50ca549b-5e64-416a-866b-1f63371db9dd\") " pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7"
Jan 29 15:33:08 crc kubenswrapper[5008]: I0129 15:33:08.224389 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50ca549b-5e64-416a-866b-1f63371db9dd-client-ca\") pod \"route-controller-manager-7d5696789b-pvrc7\" (UID: \"50ca549b-5e64-416a-866b-1f63371db9dd\") " pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7"
Jan 29 15:33:08 crc kubenswrapper[5008]: I0129 15:33:08.224438 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jffgv\" (UniqueName: \"kubernetes.io/projected/50ca549b-5e64-416a-866b-1f63371db9dd-kube-api-access-jffgv\") pod \"route-controller-manager-7d5696789b-pvrc7\" (UID: \"50ca549b-5e64-416a-866b-1f63371db9dd\") " pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7"
Jan 29 15:33:08 crc kubenswrapper[5008]: I0129 15:33:08.225883 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50ca549b-5e64-416a-866b-1f63371db9dd-client-ca\") pod \"route-controller-manager-7d5696789b-pvrc7\" (UID: \"50ca549b-5e64-416a-866b-1f63371db9dd\") " pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7"
Jan 29 15:33:08 crc kubenswrapper[5008]: I0129 15:33:08.226363 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50ca549b-5e64-416a-866b-1f63371db9dd-config\") pod \"route-controller-manager-7d5696789b-pvrc7\" (UID: \"50ca549b-5e64-416a-866b-1f63371db9dd\") " pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7"
Jan 29 15:33:08 crc kubenswrapper[5008]: I0129 15:33:08.240017 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50ca549b-5e64-416a-866b-1f63371db9dd-serving-cert\") pod \"route-controller-manager-7d5696789b-pvrc7\" (UID: \"50ca549b-5e64-416a-866b-1f63371db9dd\") " pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7"
Jan 29 15:33:08 crc kubenswrapper[5008]: I0129 15:33:08.245349 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jffgv\" (UniqueName: \"kubernetes.io/projected/50ca549b-5e64-416a-866b-1f63371db9dd-kube-api-access-jffgv\") pod \"route-controller-manager-7d5696789b-pvrc7\" (UID: \"50ca549b-5e64-416a-866b-1f63371db9dd\") " pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7"
Jan 29 15:33:08 crc kubenswrapper[5008]: I0129 15:33:08.310995 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7"
Jan 29 15:33:08 crc kubenswrapper[5008]: I0129 15:33:08.589590 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 29 15:33:09 crc kubenswrapper[5008]: I0129 15:33:09.331976 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64612440-e59b-46bb-a60f-f10989166e58" path="/var/lib/kubelet/pods/64612440-e59b-46bb-a60f-f10989166e58/volumes"
Jan 29 15:33:09 crc kubenswrapper[5008]: I0129 15:33:09.332659 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf35ff68-68b3-4743-803f-e451a5f5c5bd" path="/var/lib/kubelet/pods/bf35ff68-68b3-4743-803f-e451a5f5c5bd/volumes"
Jan 29 15:33:11 crc kubenswrapper[5008]: E0129 15:33:11.301960 5008 log.go:32] "RunPodSandbox from runtime service failed" err=<
Jan 29 15:33:11 crc kubenswrapper[5008]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-7d5696789b-pvrc7_openshift-route-controller-manager_50ca549b-5e64-416a-866b-1f63371db9dd_0(d14a2233846de97336000a8435dd0cf8a115639fb7bf1be9ebdab33cb5d0e3fb): error adding pod openshift-route-controller-manager_route-controller-manager-7d5696789b-pvrc7 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d14a2233846de97336000a8435dd0cf8a115639fb7bf1be9ebdab33cb5d0e3fb" Netns:"/var/run/netns/97bcb98b-90aa-42dd-9855-3fa0a261fad6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-7d5696789b-pvrc7;K8S_POD_INFRA_CONTAINER_ID=d14a2233846de97336000a8435dd0cf8a115639fb7bf1be9ebdab33cb5d0e3fb;K8S_POD_UID=50ca549b-5e64-416a-866b-1f63371db9dd" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7] networking: Multus: [openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7/50ca549b-5e64-416a-866b-1f63371db9dd]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod route-controller-manager-7d5696789b-pvrc7 in out of cluster comm: pod "route-controller-manager-7d5696789b-pvrc7" not found
Jan 29 15:33:11 crc kubenswrapper[5008]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Jan 29 15:33:11 crc kubenswrapper[5008]: >
Jan 29 15:33:11 crc kubenswrapper[5008]: E0129 15:33:11.302407 5008 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Jan 29 15:33:11 crc kubenswrapper[5008]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-7d5696789b-pvrc7_openshift-route-controller-manager_50ca549b-5e64-416a-866b-1f63371db9dd_0(d14a2233846de97336000a8435dd0cf8a115639fb7bf1be9ebdab33cb5d0e3fb): error adding pod openshift-route-controller-manager_route-controller-manager-7d5696789b-pvrc7 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d14a2233846de97336000a8435dd0cf8a115639fb7bf1be9ebdab33cb5d0e3fb" Netns:"/var/run/netns/97bcb98b-90aa-42dd-9855-3fa0a261fad6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-7d5696789b-pvrc7;K8S_POD_INFRA_CONTAINER_ID=d14a2233846de97336000a8435dd0cf8a115639fb7bf1be9ebdab33cb5d0e3fb;K8S_POD_UID=50ca549b-5e64-416a-866b-1f63371db9dd" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7] networking: Multus: [openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7/50ca549b-5e64-416a-866b-1f63371db9dd]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod route-controller-manager-7d5696789b-pvrc7 in out of cluster comm: pod "route-controller-manager-7d5696789b-pvrc7" not found
Jan 29 15:33:11 crc kubenswrapper[5008]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Jan 29 15:33:11 crc kubenswrapper[5008]: > pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7"
Jan 29 15:33:11 crc kubenswrapper[5008]: E0129 15:33:11.302426 5008 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Jan 29 15:33:11 crc kubenswrapper[5008]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-7d5696789b-pvrc7_openshift-route-controller-manager_50ca549b-5e64-416a-866b-1f63371db9dd_0(d14a2233846de97336000a8435dd0cf8a115639fb7bf1be9ebdab33cb5d0e3fb): error adding pod openshift-route-controller-manager_route-controller-manager-7d5696789b-pvrc7 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d14a2233846de97336000a8435dd0cf8a115639fb7bf1be9ebdab33cb5d0e3fb" Netns:"/var/run/netns/97bcb98b-90aa-42dd-9855-3fa0a261fad6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-7d5696789b-pvrc7;K8S_POD_INFRA_CONTAINER_ID=d14a2233846de97336000a8435dd0cf8a115639fb7bf1be9ebdab33cb5d0e3fb;K8S_POD_UID=50ca549b-5e64-416a-866b-1f63371db9dd" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7] networking: Multus: [openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7/50ca549b-5e64-416a-866b-1f63371db9dd]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod route-controller-manager-7d5696789b-pvrc7 in out of cluster comm: pod "route-controller-manager-7d5696789b-pvrc7" not found
Jan 29 15:33:11 crc kubenswrapper[5008]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Jan 29 15:33:11 crc kubenswrapper[5008]: > pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7"
Jan 29 15:33:11 crc kubenswrapper[5008]: E0129 15:33:11.302482 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-7d5696789b-pvrc7_openshift-route-controller-manager(50ca549b-5e64-416a-866b-1f63371db9dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-7d5696789b-pvrc7_openshift-route-controller-manager(50ca549b-5e64-416a-866b-1f63371db9dd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-7d5696789b-pvrc7_openshift-route-controller-manager_50ca549b-5e64-416a-866b-1f63371db9dd_0(d14a2233846de97336000a8435dd0cf8a115639fb7bf1be9ebdab33cb5d0e3fb): error adding pod openshift-route-controller-manager_route-controller-manager-7d5696789b-pvrc7 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"d14a2233846de97336000a8435dd0cf8a115639fb7bf1be9ebdab33cb5d0e3fb\\\" Netns:\\\"/var/run/netns/97bcb98b-90aa-42dd-9855-3fa0a261fad6\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-7d5696789b-pvrc7;K8S_POD_INFRA_CONTAINER_ID=d14a2233846de97336000a8435dd0cf8a115639fb7bf1be9ebdab33cb5d0e3fb;K8S_POD_UID=50ca549b-5e64-416a-866b-1f63371db9dd\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7] networking: Multus: [openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7/50ca549b-5e64-416a-866b-1f63371db9dd]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod route-controller-manager-7d5696789b-pvrc7 in out of cluster comm: pod \\\"route-controller-manager-7d5696789b-pvrc7\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7" podUID="50ca549b-5e64-416a-866b-1f63371db9dd"
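
The four E-level entries above are one failure surfacing at successive layers (CRI client, sandbox creation, runtime manager, pod worker): multus could not write the pod's network-status annotation because the pod had already been deleted while sandbox creation was in flight ("pod was already deleted ... not found"), which matches the SyncLoop DELETE for the same pod a few seconds later. A small sketch for pulling such sandbox failures out of a saved journal; the file name is an assumption (e.g. an export like journalctl -u kubelet > kubelet.log), and the patterns are taken from the lines above.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

func main() {
	f, err := os.Open("kubelet.log") // assumed export of the journal shown here
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Pod reference as it appears in the multus error text above.
	podRE := regexp.MustCompile(`error configuring pod \[([^\]]+)\]`)

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // CNI error lines are long
	counts := map[string]int{}
	for sc.Scan() {
		line := sc.Text()
		if !strings.Contains(line, "failed to create pod network sandbox") {
			continue
		}
		if m := podRE.FindStringSubmatch(line); m != nil {
			counts[m[1]]++
		}
	}
	for pod, n := range counts {
		fmt.Printf("%s: %d sandbox create failures\n", pod, n)
	}
}
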
Jan 29 15:33:15 crc kubenswrapper[5008]: I0129 15:33:15.875918 5008 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 29 15:33:15 crc kubenswrapper[5008]: I0129 15:33:15.876367 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://91c8b8e183ceb639dc42455dc6714f740f7596aa5a568725b22cbea1339a8752" gracePeriod=5
Jan 29 15:33:16 crc kubenswrapper[5008]: I0129 15:33:16.845477 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67df9d9956-9zzpb"]
Jan 29 15:33:16 crc kubenswrapper[5008]: I0129 15:33:16.845722 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" podUID="17f45bda-9243-4ae2-858a-e32e62abeebc" containerName="controller-manager" containerID="cri-o://7aba4d7f50689c07d3cd7a99f1cf234a06ce38d42971a905509a9922cd6383ea" gracePeriod=30
Jan 29 15:33:16 crc kubenswrapper[5008]: I0129 15:33:16.950589 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7"]
Jan 29 15:33:16 crc kubenswrapper[5008]: I0129 15:33:16.950741 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7"
Jan 29 15:33:16 crc kubenswrapper[5008]: I0129 15:33:16.983553 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7"
Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.150686 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jffgv\" (UniqueName: \"kubernetes.io/projected/50ca549b-5e64-416a-866b-1f63371db9dd-kube-api-access-jffgv\") pod \"50ca549b-5e64-416a-866b-1f63371db9dd\" (UID: \"50ca549b-5e64-416a-866b-1f63371db9dd\") "
Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.150970 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50ca549b-5e64-416a-866b-1f63371db9dd-client-ca\") pod \"50ca549b-5e64-416a-866b-1f63371db9dd\" (UID: \"50ca549b-5e64-416a-866b-1f63371db9dd\") "
Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.150991 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50ca549b-5e64-416a-866b-1f63371db9dd-serving-cert\") pod \"50ca549b-5e64-416a-866b-1f63371db9dd\" (UID: \"50ca549b-5e64-416a-866b-1f63371db9dd\") "
Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.151073 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50ca549b-5e64-416a-866b-1f63371db9dd-config\") pod \"50ca549b-5e64-416a-866b-1f63371db9dd\" (UID: \"50ca549b-5e64-416a-866b-1f63371db9dd\") "
Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.151656 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50ca549b-5e64-416a-866b-1f63371db9dd-client-ca" (OuterVolumeSpecName: "client-ca") pod "50ca549b-5e64-416a-866b-1f63371db9dd" (UID: "50ca549b-5e64-416a-866b-1f63371db9dd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.151904 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50ca549b-5e64-416a-866b-1f63371db9dd-config" (OuterVolumeSpecName: "config") pod "50ca549b-5e64-416a-866b-1f63371db9dd" (UID: "50ca549b-5e64-416a-866b-1f63371db9dd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.157068 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ca549b-5e64-416a-866b-1f63371db9dd-kube-api-access-jffgv" (OuterVolumeSpecName: "kube-api-access-jffgv") pod "50ca549b-5e64-416a-866b-1f63371db9dd" (UID: "50ca549b-5e64-416a-866b-1f63371db9dd"). InnerVolumeSpecName "kube-api-access-jffgv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.160987 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ca549b-5e64-416a-866b-1f63371db9dd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "50ca549b-5e64-416a-866b-1f63371db9dd" (UID: "50ca549b-5e64-416a-866b-1f63371db9dd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.252575 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50ca549b-5e64-416a-866b-1f63371db9dd-config\") on node \"crc\" DevicePath \"\""
Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.252614 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jffgv\" (UniqueName: \"kubernetes.io/projected/50ca549b-5e64-416a-866b-1f63371db9dd-kube-api-access-jffgv\") on node \"crc\" DevicePath \"\""
Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.252626 5008 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50ca549b-5e64-416a-866b-1f63371db9dd-client-ca\") on node \"crc\" DevicePath \"\""
Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.252635 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50ca549b-5e64-416a-866b-1f63371db9dd-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.302917 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb"
Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.401234 5008 generic.go:334] "Generic (PLEG): container finished" podID="17f45bda-9243-4ae2-858a-e32e62abeebc" containerID="7aba4d7f50689c07d3cd7a99f1cf234a06ce38d42971a905509a9922cd6383ea" exitCode=0
Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.401326 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7"
Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.401839 5008 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.401983 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" event={"ID":"17f45bda-9243-4ae2-858a-e32e62abeebc","Type":"ContainerDied","Data":"7aba4d7f50689c07d3cd7a99f1cf234a06ce38d42971a905509a9922cd6383ea"} Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.402056 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67df9d9956-9zzpb" event={"ID":"17f45bda-9243-4ae2-858a-e32e62abeebc","Type":"ContainerDied","Data":"ff460f6e4a20ab94042fb5b7e4ffa51bff723245acb3725b04c391036ec1f691"} Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.402296 5008 scope.go:117] "RemoveContainer" containerID="7aba4d7f50689c07d3cd7a99f1cf234a06ce38d42971a905509a9922cd6383ea" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.425910 5008 scope.go:117] "RemoveContainer" containerID="7aba4d7f50689c07d3cd7a99f1cf234a06ce38d42971a905509a9922cd6383ea" Jan 29 15:33:17 crc kubenswrapper[5008]: E0129 15:33:17.428769 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7aba4d7f50689c07d3cd7a99f1cf234a06ce38d42971a905509a9922cd6383ea\": container with ID starting with 7aba4d7f50689c07d3cd7a99f1cf234a06ce38d42971a905509a9922cd6383ea not found: ID does not exist" containerID="7aba4d7f50689c07d3cd7a99f1cf234a06ce38d42971a905509a9922cd6383ea" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.428830 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aba4d7f50689c07d3cd7a99f1cf234a06ce38d42971a905509a9922cd6383ea"} err="failed to get container status \"7aba4d7f50689c07d3cd7a99f1cf234a06ce38d42971a905509a9922cd6383ea\": rpc error: code = NotFound desc = could not find container \"7aba4d7f50689c07d3cd7a99f1cf234a06ce38d42971a905509a9922cd6383ea\": container with ID starting with 7aba4d7f50689c07d3cd7a99f1cf234a06ce38d42971a905509a9922cd6383ea not found: ID does not exist" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.432306 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7"] Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.436834 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d5696789b-pvrc7"] Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.454382 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-config\") pod \"17f45bda-9243-4ae2-858a-e32e62abeebc\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.454593 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17f45bda-9243-4ae2-858a-e32e62abeebc-serving-cert\") pod \"17f45bda-9243-4ae2-858a-e32e62abeebc\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.454733 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxhk7\" (UniqueName: \"kubernetes.io/projected/17f45bda-9243-4ae2-858a-e32e62abeebc-kube-api-access-mxhk7\") pod 
\"17f45bda-9243-4ae2-858a-e32e62abeebc\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.454900 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-client-ca\") pod \"17f45bda-9243-4ae2-858a-e32e62abeebc\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.455050 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-proxy-ca-bundles\") pod \"17f45bda-9243-4ae2-858a-e32e62abeebc\" (UID: \"17f45bda-9243-4ae2-858a-e32e62abeebc\") " Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.455902 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "17f45bda-9243-4ae2-858a-e32e62abeebc" (UID: "17f45bda-9243-4ae2-858a-e32e62abeebc"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.455907 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-client-ca" (OuterVolumeSpecName: "client-ca") pod "17f45bda-9243-4ae2-858a-e32e62abeebc" (UID: "17f45bda-9243-4ae2-858a-e32e62abeebc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.455931 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-config" (OuterVolumeSpecName: "config") pod "17f45bda-9243-4ae2-858a-e32e62abeebc" (UID: "17f45bda-9243-4ae2-858a-e32e62abeebc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.458887 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17f45bda-9243-4ae2-858a-e32e62abeebc-kube-api-access-mxhk7" (OuterVolumeSpecName: "kube-api-access-mxhk7") pod "17f45bda-9243-4ae2-858a-e32e62abeebc" (UID: "17f45bda-9243-4ae2-858a-e32e62abeebc"). InnerVolumeSpecName "kube-api-access-mxhk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.458953 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17f45bda-9243-4ae2-858a-e32e62abeebc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "17f45bda-9243-4ae2-858a-e32e62abeebc" (UID: "17f45bda-9243-4ae2-858a-e32e62abeebc"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.556239 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.556272 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17f45bda-9243-4ae2-858a-e32e62abeebc-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.556285 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxhk7\" (UniqueName: \"kubernetes.io/projected/17f45bda-9243-4ae2-858a-e32e62abeebc-kube-api-access-mxhk7\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.556295 5008 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.556304 5008 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17f45bda-9243-4ae2-858a-e32e62abeebc-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.740206 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67df9d9956-9zzpb"] Jan 29 15:33:17 crc kubenswrapper[5008]: I0129 15:33:17.746732 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-67df9d9956-9zzpb"] Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.163856 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-56f55f798d-jgmg7"] Jan 29 15:33:18 crc kubenswrapper[5008]: E0129 15:33:18.164077 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.164091 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 15:33:18 crc kubenswrapper[5008]: E0129 15:33:18.164112 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17f45bda-9243-4ae2-858a-e32e62abeebc" containerName="controller-manager" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.164122 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="17f45bda-9243-4ae2-858a-e32e62abeebc" containerName="controller-manager" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.164232 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="17f45bda-9243-4ae2-858a-e32e62abeebc" containerName="controller-manager" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.164245 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.164649 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.168867 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.168954 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.168867 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.169113 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.169392 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.170738 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.172549 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc"] Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.173575 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.201865 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.204615 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.206583 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.208447 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.208836 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.209163 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.209450 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.225108 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc"] Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.234935 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56f55f798d-jgmg7"] Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.267055 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-client-ca\") pod \"controller-manager-56f55f798d-jgmg7\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.267123 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zchzc\" (UniqueName: \"kubernetes.io/projected/397801e5-e82c-402b-9d5a-fd7853243b8e-kube-api-access-zchzc\") pod \"controller-manager-56f55f798d-jgmg7\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.267150 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/397801e5-e82c-402b-9d5a-fd7853243b8e-serving-cert\") pod \"controller-manager-56f55f798d-jgmg7\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.267226 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-proxy-ca-bundles\") pod \"controller-manager-56f55f798d-jgmg7\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.267274 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-config\") pod \"controller-manager-56f55f798d-jgmg7\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.368126 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nrtm\" (UniqueName: \"kubernetes.io/projected/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-kube-api-access-6nrtm\") pod \"route-controller-manager-554dcd487f-wvdgc\" (UID: \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.368179 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-proxy-ca-bundles\") pod \"controller-manager-56f55f798d-jgmg7\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.368374 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-client-ca\") pod \"route-controller-manager-554dcd487f-wvdgc\" (UID: \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.368452 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-config\") pod \"route-controller-manager-554dcd487f-wvdgc\" (UID: \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.368522 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-config\") pod \"controller-manager-56f55f798d-jgmg7\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.368831 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-client-ca\") pod \"controller-manager-56f55f798d-jgmg7\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.368987 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zchzc\" (UniqueName: \"kubernetes.io/projected/397801e5-e82c-402b-9d5a-fd7853243b8e-kube-api-access-zchzc\") pod \"controller-manager-56f55f798d-jgmg7\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.369085 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-serving-cert\") pod \"route-controller-manager-554dcd487f-wvdgc\" (UID: \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.369140 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/397801e5-e82c-402b-9d5a-fd7853243b8e-serving-cert\") pod \"controller-manager-56f55f798d-jgmg7\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.369415 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-proxy-ca-bundles\") pod \"controller-manager-56f55f798d-jgmg7\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.369829 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-client-ca\") pod \"controller-manager-56f55f798d-jgmg7\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.370660 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-config\") pod \"controller-manager-56f55f798d-jgmg7\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " 
pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.379132 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/397801e5-e82c-402b-9d5a-fd7853243b8e-serving-cert\") pod \"controller-manager-56f55f798d-jgmg7\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.394821 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zchzc\" (UniqueName: \"kubernetes.io/projected/397801e5-e82c-402b-9d5a-fd7853243b8e-kube-api-access-zchzc\") pod \"controller-manager-56f55f798d-jgmg7\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.470578 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-client-ca\") pod \"route-controller-manager-554dcd487f-wvdgc\" (UID: \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.470651 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-config\") pod \"route-controller-manager-554dcd487f-wvdgc\" (UID: \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.470750 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-serving-cert\") pod \"route-controller-manager-554dcd487f-wvdgc\" (UID: \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.470811 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nrtm\" (UniqueName: \"kubernetes.io/projected/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-kube-api-access-6nrtm\") pod \"route-controller-manager-554dcd487f-wvdgc\" (UID: \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.472657 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-config\") pod \"route-controller-manager-554dcd487f-wvdgc\" (UID: \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.473602 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-client-ca\") pod \"route-controller-manager-554dcd487f-wvdgc\" (UID: \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.481426 5008 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-serving-cert\") pod \"route-controller-manager-554dcd487f-wvdgc\" (UID: \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.489004 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nrtm\" (UniqueName: \"kubernetes.io/projected/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-kube-api-access-6nrtm\") pod \"route-controller-manager-554dcd487f-wvdgc\" (UID: \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.503043 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.521056 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.704595 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56f55f798d-jgmg7"] Jan 29 15:33:18 crc kubenswrapper[5008]: I0129 15:33:18.945424 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc"] Jan 29 15:33:18 crc kubenswrapper[5008]: W0129 15:33:18.947761 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f9d2aa9_16d5_44f9_af8b_1afc90aa4f9d.slice/crio-c3a79cd014701fe839847179f647ae029bc947b8fc7408a7aba8909c6df42ca4 WatchSource:0}: Error finding container c3a79cd014701fe839847179f647ae029bc947b8fc7408a7aba8909c6df42ca4: Status 404 returned error can't find the container with id c3a79cd014701fe839847179f647ae029bc947b8fc7408a7aba8909c6df42ca4 Jan 29 15:33:19 crc kubenswrapper[5008]: I0129 15:33:19.330396 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17f45bda-9243-4ae2-858a-e32e62abeebc" path="/var/lib/kubelet/pods/17f45bda-9243-4ae2-858a-e32e62abeebc/volumes" Jan 29 15:33:19 crc kubenswrapper[5008]: I0129 15:33:19.331074 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50ca549b-5e64-416a-866b-1f63371db9dd" path="/var/lib/kubelet/pods/50ca549b-5e64-416a-866b-1f63371db9dd/volumes" Jan 29 15:33:19 crc kubenswrapper[5008]: I0129 15:33:19.412842 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" event={"ID":"397801e5-e82c-402b-9d5a-fd7853243b8e","Type":"ContainerStarted","Data":"95a154a30a24540e8c25012c840395e72cefb05eb2ed2f5d55eef559756864c0"} Jan 29 15:33:19 crc kubenswrapper[5008]: I0129 15:33:19.412883 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" event={"ID":"397801e5-e82c-402b-9d5a-fd7853243b8e","Type":"ContainerStarted","Data":"45480fb628d022117f602dfca00d9f038e3fadbd266d68d97c74b5c3565707f3"} Jan 29 15:33:19 crc kubenswrapper[5008]: I0129 15:33:19.413073 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 
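Each "SyncLoop (PLEG): event for pod" record above carries a small JSON payload after event=. A sketch of decoding it follows; the struct mirrors the three keys visible in the log rather than kubelet's internal PodLifecycleEvent type.

// pleg_event.go - decode the event={...} payload from the
// "SyncLoop (PLEG): event for pod" records above. The struct mirrors
// the keys visible in the log; it is not kubelet's internal type.
package main

import (
	"encoding/json"
	"fmt"
)

type plegEvent struct {
	ID   string `json:"ID"`   // pod UID
	Type string `json:"Type"` // e.g. ContainerStarted, ContainerDied
	Data string `json:"Data"` // container or sandbox ID
}

func main() {
	raw := `{"ID":"397801e5-e82c-402b-9d5a-fd7853243b8e","Type":"ContainerStarted","Data":"95a154a30a24540e8c25012c840395e72cefb05eb2ed2f5d55eef559756864c0"}`
	var e plegEvent
	if err := json.Unmarshal([]byte(raw), &e); err != nil {
		panic(err)
	}
	fmt.Printf("pod=%s type=%s id=%s\n", e.ID, e.Type, e.Data)
}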
29 15:33:19 crc kubenswrapper[5008]: I0129 15:33:19.414569 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" event={"ID":"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d","Type":"ContainerStarted","Data":"d342b236c148d8f1a38327bc0072f32f91cfb96c37b4135ca4e1c23c5b141ffd"} Jan 29 15:33:19 crc kubenswrapper[5008]: I0129 15:33:19.414603 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" event={"ID":"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d","Type":"ContainerStarted","Data":"c3a79cd014701fe839847179f647ae029bc947b8fc7408a7aba8909c6df42ca4"} Jan 29 15:33:19 crc kubenswrapper[5008]: I0129 15:33:19.414772 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:19 crc kubenswrapper[5008]: I0129 15:33:19.417370 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:19 crc kubenswrapper[5008]: I0129 15:33:19.444624 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" podStartSLOduration=3.444606795 podStartE2EDuration="3.444606795s" podCreationTimestamp="2026-01-29 15:33:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:33:19.440301165 +0000 UTC m=+343.113155412" watchObservedRunningTime="2026-01-29 15:33:19.444606795 +0000 UTC m=+343.117461032" Jan 29 15:33:19 crc kubenswrapper[5008]: I0129 15:33:19.464844 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" podStartSLOduration=3.464818842 podStartE2EDuration="3.464818842s" podCreationTimestamp="2026-01-29 15:33:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:33:19.45725365 +0000 UTC m=+343.130107917" watchObservedRunningTime="2026-01-29 15:33:19.464818842 +0000 UTC m=+343.137673139" Jan 29 15:33:19 crc kubenswrapper[5008]: I0129 15:33:19.757291 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.430077 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.430452 5008 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="91c8b8e183ceb639dc42455dc6714f740f7596aa5a568725b22cbea1339a8752" exitCode=137 Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.489196 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.489274 5008 util.go:48] "No ready sandbox for pod can be found. 
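The latency-tracker records above are internally consistent: podStartSLOduration and podStartE2EDuration both equal watchObservedRunningTime minus podCreationTimestamp (15:33:19.444606795 - 15:33:16 = 3.444606795s). A quick check in Go follows; the layout string is an assumption about the logged timestamp format.

// slo_check.go - reproduce podStartE2EDuration from the latency-tracker
// record above: watchObservedRunningTime minus podCreationTimestamp.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-01-29 15:33:16 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2026-01-29 15:33:19.444606795 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(observed.Sub(created)) // 3.444606795s, matching podStartSLOduration
}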
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.616843 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.616932 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.617030 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.617075 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.617118 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.617128 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.617176 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.617218 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.617280 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.617712 5008 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.617743 5008 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.617754 5008 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.617765 5008 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.634498 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:33:21 crc kubenswrapper[5008]: I0129 15:33:21.719091 5008 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:22 crc kubenswrapper[5008]: I0129 15:33:22.439029 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 15:33:22 crc kubenswrapper[5008]: I0129 15:33:22.439098 5008 scope.go:117] "RemoveContainer" containerID="91c8b8e183ceb639dc42455dc6714f740f7596aa5a568725b22cbea1339a8752" Jan 29 15:33:22 crc kubenswrapper[5008]: I0129 15:33:22.439247 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:33:23 crc kubenswrapper[5008]: I0129 15:33:23.329892 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 29 15:33:23 crc kubenswrapper[5008]: I0129 15:33:23.330124 5008 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 29 15:33:23 crc kubenswrapper[5008]: I0129 15:33:23.341650 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 15:33:23 crc kubenswrapper[5008]: I0129 15:33:23.341700 5008 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6d660cc1-4441-4e90-bef9-fe103703354d" Jan 29 15:33:23 crc kubenswrapper[5008]: I0129 15:33:23.346264 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 15:33:23 crc kubenswrapper[5008]: I0129 15:33:23.346300 5008 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6d660cc1-4441-4e90-bef9-fe103703354d" Jan 29 15:33:25 crc kubenswrapper[5008]: I0129 15:33:25.665497 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 29 15:33:30 crc kubenswrapper[5008]: I0129 15:33:30.208412 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 29 15:33:30 crc kubenswrapper[5008]: I0129 15:33:30.578740 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 29 15:33:36 crc kubenswrapper[5008]: I0129 15:33:36.850959 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56f55f798d-jgmg7"] Jan 29 15:33:36 crc kubenswrapper[5008]: I0129 15:33:36.851505 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" podUID="397801e5-e82c-402b-9d5a-fd7853243b8e" containerName="controller-manager" containerID="cri-o://95a154a30a24540e8c25012c840395e72cefb05eb2ed2f5d55eef559756864c0" gracePeriod=30 Jan 29 15:33:36 crc kubenswrapper[5008]: I0129 15:33:36.878386 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc"] Jan 29 15:33:36 crc kubenswrapper[5008]: I0129 15:33:36.878687 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" podUID="7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d" containerName="route-controller-manager" containerID="cri-o://d342b236c148d8f1a38327bc0072f32f91cfb96c37b4135ca4e1c23c5b141ffd" gracePeriod=30 Jan 29 15:33:37 crc kubenswrapper[5008]: I0129 15:33:37.539705 5008 generic.go:334] "Generic (PLEG): container finished" podID="7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d" containerID="d342b236c148d8f1a38327bc0072f32f91cfb96c37b4135ca4e1c23c5b141ffd" exitCode=0 Jan 29 15:33:37 crc kubenswrapper[5008]: I0129 15:33:37.539923 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" event={"ID":"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d","Type":"ContainerDied","Data":"d342b236c148d8f1a38327bc0072f32f91cfb96c37b4135ca4e1c23c5b141ffd"} Jan 29 15:33:37 crc kubenswrapper[5008]: I0129 15:33:37.543701 5008 generic.go:334] "Generic (PLEG): container finished" podID="397801e5-e82c-402b-9d5a-fd7853243b8e" containerID="95a154a30a24540e8c25012c840395e72cefb05eb2ed2f5d55eef559756864c0" exitCode=0 Jan 29 15:33:37 crc kubenswrapper[5008]: I0129 15:33:37.543757 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" event={"ID":"397801e5-e82c-402b-9d5a-fd7853243b8e","Type":"ContainerDied","Data":"95a154a30a24540e8c25012c840395e72cefb05eb2ed2f5d55eef559756864c0"} Jan 29 15:33:37 crc kubenswrapper[5008]: I0129 15:33:37.979722 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:37 crc kubenswrapper[5008]: I0129 15:33:37.984578 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.019088 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-555476556f-pvck6"] Jan 29 15:33:38 crc kubenswrapper[5008]: E0129 15:33:38.019311 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d" containerName="route-controller-manager" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.019323 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d" containerName="route-controller-manager" Jan 29 15:33:38 crc kubenswrapper[5008]: E0129 15:33:38.019339 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="397801e5-e82c-402b-9d5a-fd7853243b8e" containerName="controller-manager" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.019346 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="397801e5-e82c-402b-9d5a-fd7853243b8e" containerName="controller-manager" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.019435 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d" containerName="route-controller-manager" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.019453 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="397801e5-e82c-402b-9d5a-fd7853243b8e" containerName="controller-manager" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.019805 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.021670 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-555476556f-pvck6"] Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.040617 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-serving-cert\") pod \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\" (UID: \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\") " Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.040703 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-config\") pod \"397801e5-e82c-402b-9d5a-fd7853243b8e\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.040746 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-proxy-ca-bundles\") pod \"397801e5-e82c-402b-9d5a-fd7853243b8e\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.040770 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-config\") pod \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\" (UID: \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\") " Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.040832 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zchzc\" (UniqueName: \"kubernetes.io/projected/397801e5-e82c-402b-9d5a-fd7853243b8e-kube-api-access-zchzc\") pod \"397801e5-e82c-402b-9d5a-fd7853243b8e\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.040901 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/397801e5-e82c-402b-9d5a-fd7853243b8e-serving-cert\") pod \"397801e5-e82c-402b-9d5a-fd7853243b8e\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.040926 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nrtm\" (UniqueName: \"kubernetes.io/projected/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-kube-api-access-6nrtm\") pod \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\" (UID: \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\") " Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.040958 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-client-ca\") pod \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\" (UID: \"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d\") " Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.041013 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-client-ca\") pod \"397801e5-e82c-402b-9d5a-fd7853243b8e\" (UID: \"397801e5-e82c-402b-9d5a-fd7853243b8e\") " Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.041408 5008 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "397801e5-e82c-402b-9d5a-fd7853243b8e" (UID: "397801e5-e82c-402b-9d5a-fd7853243b8e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.041410 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-config" (OuterVolumeSpecName: "config") pod "7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d" (UID: "7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.041717 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-client-ca" (OuterVolumeSpecName: "client-ca") pod "7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d" (UID: "7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.041889 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-client-ca" (OuterVolumeSpecName: "client-ca") pod "397801e5-e82c-402b-9d5a-fd7853243b8e" (UID: "397801e5-e82c-402b-9d5a-fd7853243b8e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.042069 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-config" (OuterVolumeSpecName: "config") pod "397801e5-e82c-402b-9d5a-fd7853243b8e" (UID: "397801e5-e82c-402b-9d5a-fd7853243b8e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.042134 5008 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.050956 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/397801e5-e82c-402b-9d5a-fd7853243b8e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "397801e5-e82c-402b-9d5a-fd7853243b8e" (UID: "397801e5-e82c-402b-9d5a-fd7853243b8e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.054916 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d" (UID: "7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.054925 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-kube-api-access-6nrtm" (OuterVolumeSpecName: "kube-api-access-6nrtm") pod "7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d" (UID: "7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d"). InnerVolumeSpecName "kube-api-access-6nrtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.061478 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/397801e5-e82c-402b-9d5a-fd7853243b8e-kube-api-access-zchzc" (OuterVolumeSpecName: "kube-api-access-zchzc") pod "397801e5-e82c-402b-9d5a-fd7853243b8e" (UID: "397801e5-e82c-402b-9d5a-fd7853243b8e"). InnerVolumeSpecName "kube-api-access-zchzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.142987 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ffb7e45-37e9-49cf-981c-d88916bba44b-config\") pod \"route-controller-manager-555476556f-pvck6\" (UID: \"9ffb7e45-37e9-49cf-981c-d88916bba44b\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.143033 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcvpb\" (UniqueName: \"kubernetes.io/projected/9ffb7e45-37e9-49cf-981c-d88916bba44b-kube-api-access-dcvpb\") pod \"route-controller-manager-555476556f-pvck6\" (UID: \"9ffb7e45-37e9-49cf-981c-d88916bba44b\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.143069 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ffb7e45-37e9-49cf-981c-d88916bba44b-client-ca\") pod \"route-controller-manager-555476556f-pvck6\" (UID: \"9ffb7e45-37e9-49cf-981c-d88916bba44b\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.143112 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ffb7e45-37e9-49cf-981c-d88916bba44b-serving-cert\") pod \"route-controller-manager-555476556f-pvck6\" (UID: \"9ffb7e45-37e9-49cf-981c-d88916bba44b\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.143169 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.143221 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.143231 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-config\") on node \"crc\" DevicePath 
\"\"" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.143241 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zchzc\" (UniqueName: \"kubernetes.io/projected/397801e5-e82c-402b-9d5a-fd7853243b8e-kube-api-access-zchzc\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.143252 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nrtm\" (UniqueName: \"kubernetes.io/projected/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-kube-api-access-6nrtm\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.143279 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/397801e5-e82c-402b-9d5a-fd7853243b8e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.143289 5008 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.143297 5008 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/397801e5-e82c-402b-9d5a-fd7853243b8e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.244687 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ffb7e45-37e9-49cf-981c-d88916bba44b-config\") pod \"route-controller-manager-555476556f-pvck6\" (UID: \"9ffb7e45-37e9-49cf-981c-d88916bba44b\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.245065 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcvpb\" (UniqueName: \"kubernetes.io/projected/9ffb7e45-37e9-49cf-981c-d88916bba44b-kube-api-access-dcvpb\") pod \"route-controller-manager-555476556f-pvck6\" (UID: \"9ffb7e45-37e9-49cf-981c-d88916bba44b\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.245209 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ffb7e45-37e9-49cf-981c-d88916bba44b-client-ca\") pod \"route-controller-manager-555476556f-pvck6\" (UID: \"9ffb7e45-37e9-49cf-981c-d88916bba44b\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.245360 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ffb7e45-37e9-49cf-981c-d88916bba44b-serving-cert\") pod \"route-controller-manager-555476556f-pvck6\" (UID: \"9ffb7e45-37e9-49cf-981c-d88916bba44b\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.246323 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ffb7e45-37e9-49cf-981c-d88916bba44b-client-ca\") pod \"route-controller-manager-555476556f-pvck6\" (UID: \"9ffb7e45-37e9-49cf-981c-d88916bba44b\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:33:38 crc kubenswrapper[5008]: 
I0129 15:33:38.247137 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ffb7e45-37e9-49cf-981c-d88916bba44b-config\") pod \"route-controller-manager-555476556f-pvck6\" (UID: \"9ffb7e45-37e9-49cf-981c-d88916bba44b\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.257552 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ffb7e45-37e9-49cf-981c-d88916bba44b-serving-cert\") pod \"route-controller-manager-555476556f-pvck6\" (UID: \"9ffb7e45-37e9-49cf-981c-d88916bba44b\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.261702 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcvpb\" (UniqueName: \"kubernetes.io/projected/9ffb7e45-37e9-49cf-981c-d88916bba44b-kube-api-access-dcvpb\") pod \"route-controller-manager-555476556f-pvck6\" (UID: \"9ffb7e45-37e9-49cf-981c-d88916bba44b\") " pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.341485 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.551523 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.551519 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56f55f798d-jgmg7" event={"ID":"397801e5-e82c-402b-9d5a-fd7853243b8e","Type":"ContainerDied","Data":"45480fb628d022117f602dfca00d9f038e3fadbd266d68d97c74b5c3565707f3"} Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.551681 5008 scope.go:117] "RemoveContainer" containerID="95a154a30a24540e8c25012c840395e72cefb05eb2ed2f5d55eef559756864c0" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.553030 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" event={"ID":"7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d","Type":"ContainerDied","Data":"c3a79cd014701fe839847179f647ae029bc947b8fc7408a7aba8909c6df42ca4"} Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.553090 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.569064 5008 scope.go:117] "RemoveContainer" containerID="d342b236c148d8f1a38327bc0072f32f91cfb96c37b4135ca4e1c23c5b141ffd" Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.585164 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56f55f798d-jgmg7"] Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.590306 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-56f55f798d-jgmg7"] Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.594506 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc"] Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.598455 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-554dcd487f-wvdgc"] Jan 29 15:33:38 crc kubenswrapper[5008]: I0129 15:33:38.789530 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-555476556f-pvck6"] Jan 29 15:33:38 crc kubenswrapper[5008]: W0129 15:33:38.789992 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ffb7e45_37e9_49cf_981c_d88916bba44b.slice/crio-6390f3c64efd012633ef552d358d0db88be60b32e8bc4b6efb83125ea4fe673d WatchSource:0}: Error finding container 6390f3c64efd012633ef552d358d0db88be60b32e8bc4b6efb83125ea4fe673d: Status 404 returned error can't find the container with id 6390f3c64efd012633ef552d358d0db88be60b32e8bc4b6efb83125ea4fe673d Jan 29 15:33:39 crc kubenswrapper[5008]: I0129 15:33:39.330041 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="397801e5-e82c-402b-9d5a-fd7853243b8e" path="/var/lib/kubelet/pods/397801e5-e82c-402b-9d5a-fd7853243b8e/volumes" Jan 29 15:33:39 crc kubenswrapper[5008]: I0129 15:33:39.330876 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d" path="/var/lib/kubelet/pods/7f9d2aa9-16d5-44f9-af8b-1afc90aa4f9d/volumes" Jan 29 15:33:39 crc kubenswrapper[5008]: I0129 15:33:39.565479 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" event={"ID":"9ffb7e45-37e9-49cf-981c-d88916bba44b","Type":"ContainerStarted","Data":"c5a81b7d6a5eb5b94e027d72a4da3dbb692c825c9c6bd8260d78e97a8e3f3e2b"} Jan 29 15:33:39 crc kubenswrapper[5008]: I0129 15:33:39.565569 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" event={"ID":"9ffb7e45-37e9-49cf-981c-d88916bba44b","Type":"ContainerStarted","Data":"6390f3c64efd012633ef552d358d0db88be60b32e8bc4b6efb83125ea4fe673d"} Jan 29 15:33:39 crc kubenswrapper[5008]: I0129 15:33:39.565801 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:33:39 crc kubenswrapper[5008]: I0129 15:33:39.572952 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:33:39 crc kubenswrapper[5008]: I0129 15:33:39.583821 5008 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" podStartSLOduration=3.583767707 podStartE2EDuration="3.583767707s" podCreationTimestamp="2026-01-29 15:33:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:33:39.580331554 +0000 UTC m=+363.253185811" watchObservedRunningTime="2026-01-29 15:33:39.583767707 +0000 UTC m=+363.256621964" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.180666 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4"] Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.181658 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.184968 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.185008 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.185104 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.185670 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.187556 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.189520 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.194498 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.196379 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4"] Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.273281 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93ec6db8-09a1-4b3b-900d-867f728452cb-serving-cert\") pod \"controller-manager-6fb6f5d5c7-g6fg4\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.273528 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-config\") pod \"controller-manager-6fb6f5d5c7-g6fg4\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.273653 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-proxy-ca-bundles\") pod \"controller-manager-6fb6f5d5c7-g6fg4\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.273810 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dz6b\" (UniqueName: \"kubernetes.io/projected/93ec6db8-09a1-4b3b-900d-867f728452cb-kube-api-access-5dz6b\") pod \"controller-manager-6fb6f5d5c7-g6fg4\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.273956 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-client-ca\") pod \"controller-manager-6fb6f5d5c7-g6fg4\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.374697 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93ec6db8-09a1-4b3b-900d-867f728452cb-serving-cert\") pod \"controller-manager-6fb6f5d5c7-g6fg4\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.375758 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-config\") pod \"controller-manager-6fb6f5d5c7-g6fg4\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.375955 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-proxy-ca-bundles\") pod \"controller-manager-6fb6f5d5c7-g6fg4\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.376143 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dz6b\" (UniqueName: \"kubernetes.io/projected/93ec6db8-09a1-4b3b-900d-867f728452cb-kube-api-access-5dz6b\") pod \"controller-manager-6fb6f5d5c7-g6fg4\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.376321 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-client-ca\") pod \"controller-manager-6fb6f5d5c7-g6fg4\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.377224 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-client-ca\") pod \"controller-manager-6fb6f5d5c7-g6fg4\" (UID: 
\"93ec6db8-09a1-4b3b-900d-867f728452cb\") " pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.377346 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-proxy-ca-bundles\") pod \"controller-manager-6fb6f5d5c7-g6fg4\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.377421 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-config\") pod \"controller-manager-6fb6f5d5c7-g6fg4\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.383640 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93ec6db8-09a1-4b3b-900d-867f728452cb-serving-cert\") pod \"controller-manager-6fb6f5d5c7-g6fg4\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.394175 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dz6b\" (UniqueName: \"kubernetes.io/projected/93ec6db8-09a1-4b3b-900d-867f728452cb-kube-api-access-5dz6b\") pod \"controller-manager-6fb6f5d5c7-g6fg4\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.505184 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:40 crc kubenswrapper[5008]: I0129 15:33:40.978056 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4"] Jan 29 15:33:40 crc kubenswrapper[5008]: W0129 15:33:40.982235 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93ec6db8_09a1_4b3b_900d_867f728452cb.slice/crio-5887ca4850db20b4f0627a5f2b1d2ee4799a7e5d8d086bbb5ed85795193b59c4 WatchSource:0}: Error finding container 5887ca4850db20b4f0627a5f2b1d2ee4799a7e5d8d086bbb5ed85795193b59c4: Status 404 returned error can't find the container with id 5887ca4850db20b4f0627a5f2b1d2ee4799a7e5d8d086bbb5ed85795193b59c4 Jan 29 15:33:41 crc kubenswrapper[5008]: I0129 15:33:41.577083 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" event={"ID":"93ec6db8-09a1-4b3b-900d-867f728452cb","Type":"ContainerStarted","Data":"2e22995b163eebe80e37c0570ab875dae72b5630c85b948dc8057b5763467b37"} Jan 29 15:33:41 crc kubenswrapper[5008]: I0129 15:33:41.577443 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" event={"ID":"93ec6db8-09a1-4b3b-900d-867f728452cb","Type":"ContainerStarted","Data":"5887ca4850db20b4f0627a5f2b1d2ee4799a7e5d8d086bbb5ed85795193b59c4"} Jan 29 15:33:41 crc kubenswrapper[5008]: I0129 15:33:41.577466 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:41 crc kubenswrapper[5008]: I0129 15:33:41.582994 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:33:41 crc kubenswrapper[5008]: I0129 15:33:41.602064 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" podStartSLOduration=5.602050122 podStartE2EDuration="5.602050122s" podCreationTimestamp="2026-01-29 15:33:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:33:41.597247163 +0000 UTC m=+365.270101440" watchObservedRunningTime="2026-01-29 15:33:41.602050122 +0000 UTC m=+365.274904369" Jan 29 15:33:43 crc kubenswrapper[5008]: I0129 15:33:43.991166 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:33:43 crc kubenswrapper[5008]: I0129 15:33:43.991278 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:34:02 crc kubenswrapper[5008]: I0129 15:34:02.856085 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z9t2h"] Jan 29 15:34:02 crc kubenswrapper[5008]: I0129 15:34:02.857245 5008 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openshift-marketplace/certified-operators-z9t2h" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" containerName="registry-server" containerID="cri-o://437e7c2a1dc758509d30fbbc79bf01370b5111c6588abe44eded360be5897c51" gracePeriod=2 Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.381619 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.387557 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-utilities\") pod \"250e7db8-88dd-44fd-8d73-51a6f8f4ba96\" (UID: \"250e7db8-88dd-44fd-8d73-51a6f8f4ba96\") " Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.387635 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5sl4\" (UniqueName: \"kubernetes.io/projected/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-kube-api-access-z5sl4\") pod \"250e7db8-88dd-44fd-8d73-51a6f8f4ba96\" (UID: \"250e7db8-88dd-44fd-8d73-51a6f8f4ba96\") " Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.387694 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-catalog-content\") pod \"250e7db8-88dd-44fd-8d73-51a6f8f4ba96\" (UID: \"250e7db8-88dd-44fd-8d73-51a6f8f4ba96\") " Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.389664 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-utilities" (OuterVolumeSpecName: "utilities") pod "250e7db8-88dd-44fd-8d73-51a6f8f4ba96" (UID: "250e7db8-88dd-44fd-8d73-51a6f8f4ba96"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.399127 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-kube-api-access-z5sl4" (OuterVolumeSpecName: "kube-api-access-z5sl4") pod "250e7db8-88dd-44fd-8d73-51a6f8f4ba96" (UID: "250e7db8-88dd-44fd-8d73-51a6f8f4ba96"). InnerVolumeSpecName "kube-api-access-z5sl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.454746 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "250e7db8-88dd-44fd-8d73-51a6f8f4ba96" (UID: "250e7db8-88dd-44fd-8d73-51a6f8f4ba96"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.489054 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.489106 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5sl4\" (UniqueName: \"kubernetes.io/projected/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-kube-api-access-z5sl4\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.489127 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/250e7db8-88dd-44fd-8d73-51a6f8f4ba96-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.721729 5008 generic.go:334] "Generic (PLEG): container finished" podID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" containerID="437e7c2a1dc758509d30fbbc79bf01370b5111c6588abe44eded360be5897c51" exitCode=0 Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.721804 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z9t2h" event={"ID":"250e7db8-88dd-44fd-8d73-51a6f8f4ba96","Type":"ContainerDied","Data":"437e7c2a1dc758509d30fbbc79bf01370b5111c6588abe44eded360be5897c51"} Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.721840 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z9t2h" event={"ID":"250e7db8-88dd-44fd-8d73-51a6f8f4ba96","Type":"ContainerDied","Data":"616df5323044bc3ebd3a98d75f3ea061e944f69d5bc62803ba635bd69dee1996"} Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.721845 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z9t2h" Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.721858 5008 scope.go:117] "RemoveContainer" containerID="437e7c2a1dc758509d30fbbc79bf01370b5111c6588abe44eded360be5897c51" Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.748807 5008 scope.go:117] "RemoveContainer" containerID="e1c843618cf47e0f0dd906fe965d45ec9a3b4948ac0b8fb36792a472149a1987" Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.759536 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z9t2h"] Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.763980 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z9t2h"] Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.787202 5008 scope.go:117] "RemoveContainer" containerID="e071e2b226079246f9ca57f9959626bc9e073f0d12b52ede6ad72f288413a3f9" Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.810457 5008 scope.go:117] "RemoveContainer" containerID="437e7c2a1dc758509d30fbbc79bf01370b5111c6588abe44eded360be5897c51" Jan 29 15:34:03 crc kubenswrapper[5008]: E0129 15:34:03.811209 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"437e7c2a1dc758509d30fbbc79bf01370b5111c6588abe44eded360be5897c51\": container with ID starting with 437e7c2a1dc758509d30fbbc79bf01370b5111c6588abe44eded360be5897c51 not found: ID does not exist" containerID="437e7c2a1dc758509d30fbbc79bf01370b5111c6588abe44eded360be5897c51" Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.811264 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"437e7c2a1dc758509d30fbbc79bf01370b5111c6588abe44eded360be5897c51"} err="failed to get container status \"437e7c2a1dc758509d30fbbc79bf01370b5111c6588abe44eded360be5897c51\": rpc error: code = NotFound desc = could not find container \"437e7c2a1dc758509d30fbbc79bf01370b5111c6588abe44eded360be5897c51\": container with ID starting with 437e7c2a1dc758509d30fbbc79bf01370b5111c6588abe44eded360be5897c51 not found: ID does not exist" Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.811297 5008 scope.go:117] "RemoveContainer" containerID="e1c843618cf47e0f0dd906fe965d45ec9a3b4948ac0b8fb36792a472149a1987" Jan 29 15:34:03 crc kubenswrapper[5008]: E0129 15:34:03.811894 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1c843618cf47e0f0dd906fe965d45ec9a3b4948ac0b8fb36792a472149a1987\": container with ID starting with e1c843618cf47e0f0dd906fe965d45ec9a3b4948ac0b8fb36792a472149a1987 not found: ID does not exist" containerID="e1c843618cf47e0f0dd906fe965d45ec9a3b4948ac0b8fb36792a472149a1987" Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.811959 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1c843618cf47e0f0dd906fe965d45ec9a3b4948ac0b8fb36792a472149a1987"} err="failed to get container status \"e1c843618cf47e0f0dd906fe965d45ec9a3b4948ac0b8fb36792a472149a1987\": rpc error: code = NotFound desc = could not find container \"e1c843618cf47e0f0dd906fe965d45ec9a3b4948ac0b8fb36792a472149a1987\": container with ID starting with e1c843618cf47e0f0dd906fe965d45ec9a3b4948ac0b8fb36792a472149a1987 not found: ID does not exist" Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.811995 5008 scope.go:117] "RemoveContainer" 
containerID="e071e2b226079246f9ca57f9959626bc9e073f0d12b52ede6ad72f288413a3f9" Jan 29 15:34:03 crc kubenswrapper[5008]: E0129 15:34:03.812403 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e071e2b226079246f9ca57f9959626bc9e073f0d12b52ede6ad72f288413a3f9\": container with ID starting with e071e2b226079246f9ca57f9959626bc9e073f0d12b52ede6ad72f288413a3f9 not found: ID does not exist" containerID="e071e2b226079246f9ca57f9959626bc9e073f0d12b52ede6ad72f288413a3f9" Jan 29 15:34:03 crc kubenswrapper[5008]: I0129 15:34:03.812427 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e071e2b226079246f9ca57f9959626bc9e073f0d12b52ede6ad72f288413a3f9"} err="failed to get container status \"e071e2b226079246f9ca57f9959626bc9e073f0d12b52ede6ad72f288413a3f9\": rpc error: code = NotFound desc = could not find container \"e071e2b226079246f9ca57f9959626bc9e073f0d12b52ede6ad72f288413a3f9\": container with ID starting with e071e2b226079246f9ca57f9959626bc9e073f0d12b52ede6ad72f288413a3f9 not found: ID does not exist" Jan 29 15:34:04 crc kubenswrapper[5008]: I0129 15:34:04.030681 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6zjns"] Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.045291 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fd6nq"] Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.045564 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fd6nq" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" containerName="registry-server" containerID="cri-o://a8d67992841dda8d8ecfe4b7861b1a552c63f6a32f809f7c1c99d45b6eba1024" gracePeriod=2 Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.248325 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lhtht"] Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.248702 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lhtht" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" containerName="registry-server" containerID="cri-o://a279fd865e1e761fdf4aa984a1b9d5a9d26fdcf44f1cb482fe636069d4d8f0ee" gracePeriod=2 Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.345460 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" path="/var/lib/kubelet/pods/250e7db8-88dd-44fd-8d73-51a6f8f4ba96/volumes" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.501650 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fd6nq" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.516418 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lw6k4\" (UniqueName: \"kubernetes.io/projected/37742fc9-fce4-41f0-ba04-7232b6e647a7-kube-api-access-lw6k4\") pod \"37742fc9-fce4-41f0-ba04-7232b6e647a7\" (UID: \"37742fc9-fce4-41f0-ba04-7232b6e647a7\") " Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.516479 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37742fc9-fce4-41f0-ba04-7232b6e647a7-utilities\") pod \"37742fc9-fce4-41f0-ba04-7232b6e647a7\" (UID: \"37742fc9-fce4-41f0-ba04-7232b6e647a7\") " Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.516522 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37742fc9-fce4-41f0-ba04-7232b6e647a7-catalog-content\") pod \"37742fc9-fce4-41f0-ba04-7232b6e647a7\" (UID: \"37742fc9-fce4-41f0-ba04-7232b6e647a7\") " Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.517498 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37742fc9-fce4-41f0-ba04-7232b6e647a7-utilities" (OuterVolumeSpecName: "utilities") pod "37742fc9-fce4-41f0-ba04-7232b6e647a7" (UID: "37742fc9-fce4-41f0-ba04-7232b6e647a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.538241 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37742fc9-fce4-41f0-ba04-7232b6e647a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37742fc9-fce4-41f0-ba04-7232b6e647a7" (UID: "37742fc9-fce4-41f0-ba04-7232b6e647a7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.538634 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37742fc9-fce4-41f0-ba04-7232b6e647a7-kube-api-access-lw6k4" (OuterVolumeSpecName: "kube-api-access-lw6k4") pod "37742fc9-fce4-41f0-ba04-7232b6e647a7" (UID: "37742fc9-fce4-41f0-ba04-7232b6e647a7"). InnerVolumeSpecName "kube-api-access-lw6k4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.617621 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lw6k4\" (UniqueName: \"kubernetes.io/projected/37742fc9-fce4-41f0-ba04-7232b6e647a7-kube-api-access-lw6k4\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.617671 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37742fc9-fce4-41f0-ba04-7232b6e647a7-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.617687 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37742fc9-fce4-41f0-ba04-7232b6e647a7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.643705 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lhtht" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.718156 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a954daed-802a-4b46-81ef-7079dcddbaa5-catalog-content\") pod \"a954daed-802a-4b46-81ef-7079dcddbaa5\" (UID: \"a954daed-802a-4b46-81ef-7079dcddbaa5\") " Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.718237 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pfbb\" (UniqueName: \"kubernetes.io/projected/a954daed-802a-4b46-81ef-7079dcddbaa5-kube-api-access-6pfbb\") pod \"a954daed-802a-4b46-81ef-7079dcddbaa5\" (UID: \"a954daed-802a-4b46-81ef-7079dcddbaa5\") " Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.718279 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a954daed-802a-4b46-81ef-7079dcddbaa5-utilities\") pod \"a954daed-802a-4b46-81ef-7079dcddbaa5\" (UID: \"a954daed-802a-4b46-81ef-7079dcddbaa5\") " Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.719183 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a954daed-802a-4b46-81ef-7079dcddbaa5-utilities" (OuterVolumeSpecName: "utilities") pod "a954daed-802a-4b46-81ef-7079dcddbaa5" (UID: "a954daed-802a-4b46-81ef-7079dcddbaa5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.720962 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a954daed-802a-4b46-81ef-7079dcddbaa5-kube-api-access-6pfbb" (OuterVolumeSpecName: "kube-api-access-6pfbb") pod "a954daed-802a-4b46-81ef-7079dcddbaa5" (UID: "a954daed-802a-4b46-81ef-7079dcddbaa5"). InnerVolumeSpecName "kube-api-access-6pfbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.742157 5008 generic.go:334] "Generic (PLEG): container finished" podID="a954daed-802a-4b46-81ef-7079dcddbaa5" containerID="a279fd865e1e761fdf4aa984a1b9d5a9d26fdcf44f1cb482fe636069d4d8f0ee" exitCode=0 Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.742325 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lhtht" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.742424 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhtht" event={"ID":"a954daed-802a-4b46-81ef-7079dcddbaa5","Type":"ContainerDied","Data":"a279fd865e1e761fdf4aa984a1b9d5a9d26fdcf44f1cb482fe636069d4d8f0ee"} Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.742490 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhtht" event={"ID":"a954daed-802a-4b46-81ef-7079dcddbaa5","Type":"ContainerDied","Data":"c7bb2d8d5dfc5bd460b51cbe8abe72fb7d9bc5d3e8c022f6997fb845b267cc34"} Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.742519 5008 scope.go:117] "RemoveContainer" containerID="a279fd865e1e761fdf4aa984a1b9d5a9d26fdcf44f1cb482fe636069d4d8f0ee" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.745909 5008 generic.go:334] "Generic (PLEG): container finished" podID="37742fc9-fce4-41f0-ba04-7232b6e647a7" containerID="a8d67992841dda8d8ecfe4b7861b1a552c63f6a32f809f7c1c99d45b6eba1024" exitCode=0 Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.745944 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fd6nq" event={"ID":"37742fc9-fce4-41f0-ba04-7232b6e647a7","Type":"ContainerDied","Data":"a8d67992841dda8d8ecfe4b7861b1a552c63f6a32f809f7c1c99d45b6eba1024"} Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.746152 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fd6nq" event={"ID":"37742fc9-fce4-41f0-ba04-7232b6e647a7","Type":"ContainerDied","Data":"335be0a36e05771a7a88d81fee1b61fe29f073571f151738b87168e8e0776f1d"} Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.746223 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fd6nq" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.772993 5008 scope.go:117] "RemoveContainer" containerID="3eeb9aabc3dc27af90cd2bf8cd8e6832ded1925edec96187d03601420f52e277" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.785001 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fd6nq"] Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.793553 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fd6nq"] Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.807335 5008 scope.go:117] "RemoveContainer" containerID="01e163bc6a4525960ce048e49dcc3353c6751e2f22fe5f912048f843ee4812a5" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.819143 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pfbb\" (UniqueName: \"kubernetes.io/projected/a954daed-802a-4b46-81ef-7079dcddbaa5-kube-api-access-6pfbb\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.819174 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a954daed-802a-4b46-81ef-7079dcddbaa5-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.820376 5008 scope.go:117] "RemoveContainer" containerID="a279fd865e1e761fdf4aa984a1b9d5a9d26fdcf44f1cb482fe636069d4d8f0ee" Jan 29 15:34:05 crc kubenswrapper[5008]: E0129 15:34:05.820684 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a279fd865e1e761fdf4aa984a1b9d5a9d26fdcf44f1cb482fe636069d4d8f0ee\": container with ID starting with a279fd865e1e761fdf4aa984a1b9d5a9d26fdcf44f1cb482fe636069d4d8f0ee not found: ID does not exist" containerID="a279fd865e1e761fdf4aa984a1b9d5a9d26fdcf44f1cb482fe636069d4d8f0ee" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.820712 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a279fd865e1e761fdf4aa984a1b9d5a9d26fdcf44f1cb482fe636069d4d8f0ee"} err="failed to get container status \"a279fd865e1e761fdf4aa984a1b9d5a9d26fdcf44f1cb482fe636069d4d8f0ee\": rpc error: code = NotFound desc = could not find container \"a279fd865e1e761fdf4aa984a1b9d5a9d26fdcf44f1cb482fe636069d4d8f0ee\": container with ID starting with a279fd865e1e761fdf4aa984a1b9d5a9d26fdcf44f1cb482fe636069d4d8f0ee not found: ID does not exist" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.820735 5008 scope.go:117] "RemoveContainer" containerID="3eeb9aabc3dc27af90cd2bf8cd8e6832ded1925edec96187d03601420f52e277" Jan 29 15:34:05 crc kubenswrapper[5008]: E0129 15:34:05.821268 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eeb9aabc3dc27af90cd2bf8cd8e6832ded1925edec96187d03601420f52e277\": container with ID starting with 3eeb9aabc3dc27af90cd2bf8cd8e6832ded1925edec96187d03601420f52e277 not found: ID does not exist" containerID="3eeb9aabc3dc27af90cd2bf8cd8e6832ded1925edec96187d03601420f52e277" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.821309 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eeb9aabc3dc27af90cd2bf8cd8e6832ded1925edec96187d03601420f52e277"} err="failed to get container status \"3eeb9aabc3dc27af90cd2bf8cd8e6832ded1925edec96187d03601420f52e277\": rpc error: code 
= NotFound desc = could not find container \"3eeb9aabc3dc27af90cd2bf8cd8e6832ded1925edec96187d03601420f52e277\": container with ID starting with 3eeb9aabc3dc27af90cd2bf8cd8e6832ded1925edec96187d03601420f52e277 not found: ID does not exist" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.821339 5008 scope.go:117] "RemoveContainer" containerID="01e163bc6a4525960ce048e49dcc3353c6751e2f22fe5f912048f843ee4812a5" Jan 29 15:34:05 crc kubenswrapper[5008]: E0129 15:34:05.821677 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01e163bc6a4525960ce048e49dcc3353c6751e2f22fe5f912048f843ee4812a5\": container with ID starting with 01e163bc6a4525960ce048e49dcc3353c6751e2f22fe5f912048f843ee4812a5 not found: ID does not exist" containerID="01e163bc6a4525960ce048e49dcc3353c6751e2f22fe5f912048f843ee4812a5" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.821739 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01e163bc6a4525960ce048e49dcc3353c6751e2f22fe5f912048f843ee4812a5"} err="failed to get container status \"01e163bc6a4525960ce048e49dcc3353c6751e2f22fe5f912048f843ee4812a5\": rpc error: code = NotFound desc = could not find container \"01e163bc6a4525960ce048e49dcc3353c6751e2f22fe5f912048f843ee4812a5\": container with ID starting with 01e163bc6a4525960ce048e49dcc3353c6751e2f22fe5f912048f843ee4812a5 not found: ID does not exist" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.821800 5008 scope.go:117] "RemoveContainer" containerID="a8d67992841dda8d8ecfe4b7861b1a552c63f6a32f809f7c1c99d45b6eba1024" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.831283 5008 scope.go:117] "RemoveContainer" containerID="20a33ecc180de094bba9265fa7129b16b4f9de45343188f6197cb71f4f1ca528" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.843504 5008 scope.go:117] "RemoveContainer" containerID="07a2fa9e941811bcc7892420659a52c45d0ac131e896badbed2f3faf0a10a2bc" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.855068 5008 scope.go:117] "RemoveContainer" containerID="a8d67992841dda8d8ecfe4b7861b1a552c63f6a32f809f7c1c99d45b6eba1024" Jan 29 15:34:05 crc kubenswrapper[5008]: E0129 15:34:05.855460 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8d67992841dda8d8ecfe4b7861b1a552c63f6a32f809f7c1c99d45b6eba1024\": container with ID starting with a8d67992841dda8d8ecfe4b7861b1a552c63f6a32f809f7c1c99d45b6eba1024 not found: ID does not exist" containerID="a8d67992841dda8d8ecfe4b7861b1a552c63f6a32f809f7c1c99d45b6eba1024" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.855501 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8d67992841dda8d8ecfe4b7861b1a552c63f6a32f809f7c1c99d45b6eba1024"} err="failed to get container status \"a8d67992841dda8d8ecfe4b7861b1a552c63f6a32f809f7c1c99d45b6eba1024\": rpc error: code = NotFound desc = could not find container \"a8d67992841dda8d8ecfe4b7861b1a552c63f6a32f809f7c1c99d45b6eba1024\": container with ID starting with a8d67992841dda8d8ecfe4b7861b1a552c63f6a32f809f7c1c99d45b6eba1024 not found: ID does not exist" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.855541 5008 scope.go:117] "RemoveContainer" containerID="20a33ecc180de094bba9265fa7129b16b4f9de45343188f6197cb71f4f1ca528" Jan 29 15:34:05 crc kubenswrapper[5008]: E0129 15:34:05.855957 5008 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"20a33ecc180de094bba9265fa7129b16b4f9de45343188f6197cb71f4f1ca528\": container with ID starting with 20a33ecc180de094bba9265fa7129b16b4f9de45343188f6197cb71f4f1ca528 not found: ID does not exist" containerID="20a33ecc180de094bba9265fa7129b16b4f9de45343188f6197cb71f4f1ca528" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.856005 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20a33ecc180de094bba9265fa7129b16b4f9de45343188f6197cb71f4f1ca528"} err="failed to get container status \"20a33ecc180de094bba9265fa7129b16b4f9de45343188f6197cb71f4f1ca528\": rpc error: code = NotFound desc = could not find container \"20a33ecc180de094bba9265fa7129b16b4f9de45343188f6197cb71f4f1ca528\": container with ID starting with 20a33ecc180de094bba9265fa7129b16b4f9de45343188f6197cb71f4f1ca528 not found: ID does not exist" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.856032 5008 scope.go:117] "RemoveContainer" containerID="07a2fa9e941811bcc7892420659a52c45d0ac131e896badbed2f3faf0a10a2bc" Jan 29 15:34:05 crc kubenswrapper[5008]: E0129 15:34:05.856308 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07a2fa9e941811bcc7892420659a52c45d0ac131e896badbed2f3faf0a10a2bc\": container with ID starting with 07a2fa9e941811bcc7892420659a52c45d0ac131e896badbed2f3faf0a10a2bc not found: ID does not exist" containerID="07a2fa9e941811bcc7892420659a52c45d0ac131e896badbed2f3faf0a10a2bc" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.856349 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07a2fa9e941811bcc7892420659a52c45d0ac131e896badbed2f3faf0a10a2bc"} err="failed to get container status \"07a2fa9e941811bcc7892420659a52c45d0ac131e896badbed2f3faf0a10a2bc\": rpc error: code = NotFound desc = could not find container \"07a2fa9e941811bcc7892420659a52c45d0ac131e896badbed2f3faf0a10a2bc\": container with ID starting with 07a2fa9e941811bcc7892420659a52c45d0ac131e896badbed2f3faf0a10a2bc not found: ID does not exist" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.884064 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a954daed-802a-4b46-81ef-7079dcddbaa5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a954daed-802a-4b46-81ef-7079dcddbaa5" (UID: "a954daed-802a-4b46-81ef-7079dcddbaa5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:34:05 crc kubenswrapper[5008]: I0129 15:34:05.920549 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a954daed-802a-4b46-81ef-7079dcddbaa5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:06 crc kubenswrapper[5008]: I0129 15:34:06.084094 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lhtht"] Jan 29 15:34:06 crc kubenswrapper[5008]: I0129 15:34:06.090151 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lhtht"] Jan 29 15:34:07 crc kubenswrapper[5008]: I0129 15:34:07.338627 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" path="/var/lib/kubelet/pods/37742fc9-fce4-41f0-ba04-7232b6e647a7/volumes" Jan 29 15:34:07 crc kubenswrapper[5008]: I0129 15:34:07.340275 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" path="/var/lib/kubelet/pods/a954daed-802a-4b46-81ef-7079dcddbaa5/volumes" Jan 29 15:34:13 crc kubenswrapper[5008]: I0129 15:34:13.990647 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:34:13 crc kubenswrapper[5008]: I0129 15:34:13.992563 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.071014 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" podUID="30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" containerName="oauth-openshift" containerID="cri-o://2fdcfc92513722a0ed1839d1becd6b4c7cf2ef93e9416fff2dde6f74896351b7" gracePeriod=15 Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.476025 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.509154 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6586599bc4-dbtw8"] Jan 29 15:34:29 crc kubenswrapper[5008]: E0129 15:34:29.509445 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" containerName="registry-server" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.509462 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" containerName="registry-server" Jan 29 15:34:29 crc kubenswrapper[5008]: E0129 15:34:29.509473 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" containerName="extract-content" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.509481 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" containerName="extract-content" Jan 29 15:34:29 crc kubenswrapper[5008]: E0129 15:34:29.509489 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" containerName="extract-content" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.509496 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" containerName="extract-content" Jan 29 15:34:29 crc kubenswrapper[5008]: E0129 15:34:29.509538 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" containerName="oauth-openshift" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.509546 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" containerName="oauth-openshift" Jan 29 15:34:29 crc kubenswrapper[5008]: E0129 15:34:29.509554 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" containerName="extract-content" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.509561 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" containerName="extract-content" Jan 29 15:34:29 crc kubenswrapper[5008]: E0129 15:34:29.509576 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" containerName="extract-utilities" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.509607 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" containerName="extract-utilities" Jan 29 15:34:29 crc kubenswrapper[5008]: E0129 15:34:29.509619 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" containerName="registry-server" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.509627 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" containerName="registry-server" Jan 29 15:34:29 crc kubenswrapper[5008]: E0129 15:34:29.509637 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" containerName="extract-utilities" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.509646 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" containerName="extract-utilities" Jan 29 15:34:29 crc kubenswrapper[5008]: E0129 15:34:29.509655 5008 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" containerName="extract-utilities" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.509683 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" containerName="extract-utilities" Jan 29 15:34:29 crc kubenswrapper[5008]: E0129 15:34:29.509693 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" containerName="registry-server" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.509701 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" containerName="registry-server" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.509847 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="250e7db8-88dd-44fd-8d73-51a6f8f4ba96" containerName="registry-server" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.509882 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="37742fc9-fce4-41f0-ba04-7232b6e647a7" containerName="registry-server" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.509899 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" containerName="oauth-openshift" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.509965 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="a954daed-802a-4b46-81ef-7079dcddbaa5" containerName="registry-server" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.510551 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.526873 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6586599bc4-dbtw8"] Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.592702 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-cliconfig\") pod \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.592742 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-trusted-ca-bundle\") pod \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.592764 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-idp-0-file-data\") pod \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.592808 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-error\") pod \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.592825 5008 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-ocp-branding-template\") pod \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.592870 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-login\") pod \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.592890 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-router-certs\") pod \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.592916 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-service-ca\") pod \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.592936 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-serving-cert\") pod \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.592956 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-audit-dir\") pod \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.592977 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfxpn\" (UniqueName: \"kubernetes.io/projected/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-kube-api-access-xfxpn\") pod \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593008 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-provider-selection\") pod \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593027 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-session\") pod \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593043 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-audit-policies\") pod \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\" (UID: \"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1\") " Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593194 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593215 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-user-template-error\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593231 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz5jz\" (UniqueName: \"kubernetes.io/projected/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-kube-api-access-lz5jz\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593249 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-router-certs\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593266 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593294 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593309 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-service-ca\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593334 5008 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593349 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-audit-policies\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593364 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593381 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593397 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-audit-dir\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593418 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-session\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593437 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-user-template-login\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593613 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" (UID: "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.593625 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" (UID: "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.594849 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" (UID: "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.600658 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" (UID: "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.600878 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" (UID: "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.601424 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-kube-api-access-xfxpn" (OuterVolumeSpecName: "kube-api-access-xfxpn") pod "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" (UID: "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1"). InnerVolumeSpecName "kube-api-access-xfxpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.602744 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" (UID: "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.603115 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" (UID: "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.603378 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" (UID: "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.603541 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" (UID: "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.605093 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" (UID: "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.606218 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" (UID: "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.607460 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" (UID: "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.614262 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" (UID: "30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.694835 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-session\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.694892 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-user-template-login\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.694950 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-user-template-error\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.694974 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.694996 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz5jz\" (UniqueName: \"kubernetes.io/projected/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-kube-api-access-lz5jz\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695033 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-router-certs\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695059 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695101 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " 
pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695121 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-service-ca\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695158 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695181 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-audit-policies\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695205 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695227 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695248 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-audit-dir\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695302 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695316 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695330 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-idp-0-file-data\") 
on node \"crc\" DevicePath \"\"" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695343 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695357 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695370 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695382 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695395 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695408 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695422 5008 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695435 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfxpn\" (UniqueName: \"kubernetes.io/projected/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-kube-api-access-xfxpn\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695449 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695461 5008 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695475 5008 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.695515 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-audit-dir\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: 
\"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.696931 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.696926 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-audit-policies\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.698040 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.698045 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-user-template-login\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.698738 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-service-ca\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.700335 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.700418 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-router-certs\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.700695 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-user-template-error\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " 
pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.702021 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.704503 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.705137 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.709575 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-v4-0-config-system-session\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.712948 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz5jz\" (UniqueName: \"kubernetes.io/projected/28fd5d8a-b558-4ede-9bcd-7ac80456d2ca-kube-api-access-lz5jz\") pod \"oauth-openshift-6586599bc4-dbtw8\" (UID: \"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca\") " pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:29 crc kubenswrapper[5008]: I0129 15:34:29.833440 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:30 crc kubenswrapper[5008]: I0129 15:34:30.022240 5008 generic.go:334] "Generic (PLEG): container finished" podID="30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" containerID="2fdcfc92513722a0ed1839d1becd6b4c7cf2ef93e9416fff2dde6f74896351b7" exitCode=0 Jan 29 15:34:30 crc kubenswrapper[5008]: I0129 15:34:30.022453 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" event={"ID":"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1","Type":"ContainerDied","Data":"2fdcfc92513722a0ed1839d1becd6b4c7cf2ef93e9416fff2dde6f74896351b7"} Jan 29 15:34:30 crc kubenswrapper[5008]: I0129 15:34:30.022657 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" event={"ID":"30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1","Type":"ContainerDied","Data":"06359078d405bd0e54235a406ebdf31eea4653e6c329abc798e56c3dfc469667"} Jan 29 15:34:30 crc kubenswrapper[5008]: I0129 15:34:30.022725 5008 scope.go:117] "RemoveContainer" containerID="2fdcfc92513722a0ed1839d1becd6b4c7cf2ef93e9416fff2dde6f74896351b7" Jan 29 15:34:30 crc kubenswrapper[5008]: I0129 15:34:30.022535 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6zjns" Jan 29 15:34:30 crc kubenswrapper[5008]: I0129 15:34:30.046939 5008 scope.go:117] "RemoveContainer" containerID="2fdcfc92513722a0ed1839d1becd6b4c7cf2ef93e9416fff2dde6f74896351b7" Jan 29 15:34:30 crc kubenswrapper[5008]: E0129 15:34:30.047495 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fdcfc92513722a0ed1839d1becd6b4c7cf2ef93e9416fff2dde6f74896351b7\": container with ID starting with 2fdcfc92513722a0ed1839d1becd6b4c7cf2ef93e9416fff2dde6f74896351b7 not found: ID does not exist" containerID="2fdcfc92513722a0ed1839d1becd6b4c7cf2ef93e9416fff2dde6f74896351b7" Jan 29 15:34:30 crc kubenswrapper[5008]: I0129 15:34:30.048167 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fdcfc92513722a0ed1839d1becd6b4c7cf2ef93e9416fff2dde6f74896351b7"} err="failed to get container status \"2fdcfc92513722a0ed1839d1becd6b4c7cf2ef93e9416fff2dde6f74896351b7\": rpc error: code = NotFound desc = could not find container \"2fdcfc92513722a0ed1839d1becd6b4c7cf2ef93e9416fff2dde6f74896351b7\": container with ID starting with 2fdcfc92513722a0ed1839d1becd6b4c7cf2ef93e9416fff2dde6f74896351b7 not found: ID does not exist" Jan 29 15:34:30 crc kubenswrapper[5008]: I0129 15:34:30.064708 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6zjns"] Jan 29 15:34:30 crc kubenswrapper[5008]: I0129 15:34:30.064752 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6zjns"] Jan 29 15:34:30 crc kubenswrapper[5008]: I0129 15:34:30.083110 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6586599bc4-dbtw8"] Jan 29 15:34:31 crc kubenswrapper[5008]: I0129 15:34:31.029248 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" event={"ID":"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca","Type":"ContainerStarted","Data":"5d4dd487c926696523442afdcba3dcf59ce21fd22ceb8ff6d4be8453a2851820"} Jan 29 15:34:31 crc kubenswrapper[5008]: 
I0129 15:34:31.029515 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" event={"ID":"28fd5d8a-b558-4ede-9bcd-7ac80456d2ca","Type":"ContainerStarted","Data":"1ec1133f12712a69bb2b3eb98694534f34af6427fd961001ad600a0cdab82fcc"} Jan 29 15:34:31 crc kubenswrapper[5008]: I0129 15:34:31.030898 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:31 crc kubenswrapper[5008]: I0129 15:34:31.037557 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" Jan 29 15:34:31 crc kubenswrapper[5008]: I0129 15:34:31.056408 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6586599bc4-dbtw8" podStartSLOduration=27.056386138 podStartE2EDuration="27.056386138s" podCreationTimestamp="2026-01-29 15:34:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:34:31.054463032 +0000 UTC m=+414.727317289" watchObservedRunningTime="2026-01-29 15:34:31.056386138 +0000 UTC m=+414.729240395" Jan 29 15:34:31 crc kubenswrapper[5008]: I0129 15:34:31.329208 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1" path="/var/lib/kubelet/pods/30a4c50c-34f7-4c9c-9cbd-baaf50ed16e1/volumes" Jan 29 15:34:32 crc kubenswrapper[5008]: I0129 15:34:32.812832 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nppsr"] Jan 29 15:34:32 crc kubenswrapper[5008]: I0129 15:34:32.813954 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:32 crc kubenswrapper[5008]: I0129 15:34:32.840007 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nppsr"] Jan 29 15:34:32 crc kubenswrapper[5008]: I0129 15:34:32.937469 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnh6v\" (UniqueName: \"kubernetes.io/projected/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-kube-api-access-mnh6v\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:32 crc kubenswrapper[5008]: I0129 15:34:32.937536 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-bound-sa-token\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:32 crc kubenswrapper[5008]: I0129 15:34:32.937571 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-trusted-ca\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:32 crc kubenswrapper[5008]: I0129 15:34:32.937593 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-registry-tls\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:32 crc kubenswrapper[5008]: I0129 15:34:32.937675 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:32 crc kubenswrapper[5008]: I0129 15:34:32.937726 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:32 crc kubenswrapper[5008]: I0129 15:34:32.937820 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-registry-certificates\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:32 crc kubenswrapper[5008]: I0129 15:34:32.937872 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:32 crc kubenswrapper[5008]: I0129 15:34:32.963234 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:33 crc kubenswrapper[5008]: I0129 15:34:33.038991 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:33 crc kubenswrapper[5008]: I0129 15:34:33.039068 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-registry-certificates\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:33 crc kubenswrapper[5008]: I0129 15:34:33.039098 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:33 crc kubenswrapper[5008]: I0129 15:34:33.039155 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnh6v\" (UniqueName: \"kubernetes.io/projected/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-kube-api-access-mnh6v\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:33 crc kubenswrapper[5008]: I0129 15:34:33.039187 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-bound-sa-token\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:33 crc kubenswrapper[5008]: I0129 15:34:33.039212 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-trusted-ca\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:33 crc kubenswrapper[5008]: I0129 15:34:33.039234 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-registry-tls\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:33 crc kubenswrapper[5008]: I0129 15:34:33.039764 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:33 crc kubenswrapper[5008]: I0129 15:34:33.040914 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-trusted-ca\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:33 crc kubenswrapper[5008]: I0129 15:34:33.041018 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-registry-certificates\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:33 crc kubenswrapper[5008]: I0129 15:34:33.049719 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:33 crc kubenswrapper[5008]: I0129 15:34:33.050408 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-registry-tls\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:33 crc kubenswrapper[5008]: I0129 15:34:33.060180 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-bound-sa-token\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:33 crc kubenswrapper[5008]: I0129 15:34:33.063460 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnh6v\" (UniqueName: \"kubernetes.io/projected/13cb1565-085a-43d5-8c2c-8bc9ad134dbd-kube-api-access-mnh6v\") pod \"image-registry-66df7c8f76-nppsr\" (UID: \"13cb1565-085a-43d5-8c2c-8bc9ad134dbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:33 crc kubenswrapper[5008]: I0129 15:34:33.129733 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:33 crc kubenswrapper[5008]: I0129 15:34:33.378917 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nppsr"] Jan 29 15:34:34 crc kubenswrapper[5008]: I0129 15:34:34.044617 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" event={"ID":"13cb1565-085a-43d5-8c2c-8bc9ad134dbd","Type":"ContainerStarted","Data":"0bfa8fceab34a99c5661ce181db26eb15e0ddc6f70e78329eb85ff451fdb0e4a"} Jan 29 15:34:34 crc kubenswrapper[5008]: I0129 15:34:34.044919 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" event={"ID":"13cb1565-085a-43d5-8c2c-8bc9ad134dbd","Type":"ContainerStarted","Data":"dc226af586b75e70cbacda6f5c41b494753d9968ce8d8bc01f319a9ebc77ecc3"} Jan 29 15:34:34 crc kubenswrapper[5008]: I0129 15:34:34.045637 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:34 crc kubenswrapper[5008]: I0129 15:34:34.060122 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" podStartSLOduration=2.060102379 podStartE2EDuration="2.060102379s" podCreationTimestamp="2026-01-29 15:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:34:34.058680615 +0000 UTC m=+417.731534862" watchObservedRunningTime="2026-01-29 15:34:34.060102379 +0000 UTC m=+417.732956626" Jan 29 15:34:36 crc kubenswrapper[5008]: I0129 15:34:36.835942 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4"] Jan 29 15:34:36 crc kubenswrapper[5008]: I0129 15:34:36.836541 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" podUID="93ec6db8-09a1-4b3b-900d-867f728452cb" containerName="controller-manager" containerID="cri-o://2e22995b163eebe80e37c0570ab875dae72b5630c85b948dc8057b5763467b37" gracePeriod=30 Jan 29 15:34:36 crc kubenswrapper[5008]: I0129 15:34:36.869474 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-555476556f-pvck6"] Jan 29 15:34:36 crc kubenswrapper[5008]: I0129 15:34:36.869702 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" podUID="9ffb7e45-37e9-49cf-981c-d88916bba44b" containerName="route-controller-manager" containerID="cri-o://c5a81b7d6a5eb5b94e027d72a4da3dbb692c825c9c6bd8260d78e97a8e3f3e2b" gracePeriod=30 Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.068778 5008 generic.go:334] "Generic (PLEG): container finished" podID="93ec6db8-09a1-4b3b-900d-867f728452cb" containerID="2e22995b163eebe80e37c0570ab875dae72b5630c85b948dc8057b5763467b37" exitCode=0 Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.068868 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" event={"ID":"93ec6db8-09a1-4b3b-900d-867f728452cb","Type":"ContainerDied","Data":"2e22995b163eebe80e37c0570ab875dae72b5630c85b948dc8057b5763467b37"} Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 
15:34:37.070480 5008 generic.go:334] "Generic (PLEG): container finished" podID="9ffb7e45-37e9-49cf-981c-d88916bba44b" containerID="c5a81b7d6a5eb5b94e027d72a4da3dbb692c825c9c6bd8260d78e97a8e3f3e2b" exitCode=0 Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.070513 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" event={"ID":"9ffb7e45-37e9-49cf-981c-d88916bba44b","Type":"ContainerDied","Data":"c5a81b7d6a5eb5b94e027d72a4da3dbb692c825c9c6bd8260d78e97a8e3f3e2b"} Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.310230 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.315955 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.399415 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ffb7e45-37e9-49cf-981c-d88916bba44b-serving-cert\") pod \"9ffb7e45-37e9-49cf-981c-d88916bba44b\" (UID: \"9ffb7e45-37e9-49cf-981c-d88916bba44b\") " Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.399504 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcvpb\" (UniqueName: \"kubernetes.io/projected/9ffb7e45-37e9-49cf-981c-d88916bba44b-kube-api-access-dcvpb\") pod \"9ffb7e45-37e9-49cf-981c-d88916bba44b\" (UID: \"9ffb7e45-37e9-49cf-981c-d88916bba44b\") " Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.399531 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-proxy-ca-bundles\") pod \"93ec6db8-09a1-4b3b-900d-867f728452cb\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.399561 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ffb7e45-37e9-49cf-981c-d88916bba44b-client-ca\") pod \"9ffb7e45-37e9-49cf-981c-d88916bba44b\" (UID: \"9ffb7e45-37e9-49cf-981c-d88916bba44b\") " Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.399586 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-config\") pod \"93ec6db8-09a1-4b3b-900d-867f728452cb\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.399611 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-client-ca\") pod \"93ec6db8-09a1-4b3b-900d-867f728452cb\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.399680 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93ec6db8-09a1-4b3b-900d-867f728452cb-serving-cert\") pod \"93ec6db8-09a1-4b3b-900d-867f728452cb\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.399741 5008 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ffb7e45-37e9-49cf-981c-d88916bba44b-config\") pod \"9ffb7e45-37e9-49cf-981c-d88916bba44b\" (UID: \"9ffb7e45-37e9-49cf-981c-d88916bba44b\") " Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.399770 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dz6b\" (UniqueName: \"kubernetes.io/projected/93ec6db8-09a1-4b3b-900d-867f728452cb-kube-api-access-5dz6b\") pod \"93ec6db8-09a1-4b3b-900d-867f728452cb\" (UID: \"93ec6db8-09a1-4b3b-900d-867f728452cb\") " Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.400576 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "93ec6db8-09a1-4b3b-900d-867f728452cb" (UID: "93ec6db8-09a1-4b3b-900d-867f728452cb"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.400666 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-config" (OuterVolumeSpecName: "config") pod "93ec6db8-09a1-4b3b-900d-867f728452cb" (UID: "93ec6db8-09a1-4b3b-900d-867f728452cb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.400744 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ffb7e45-37e9-49cf-981c-d88916bba44b-client-ca" (OuterVolumeSpecName: "client-ca") pod "9ffb7e45-37e9-49cf-981c-d88916bba44b" (UID: "9ffb7e45-37e9-49cf-981c-d88916bba44b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.401503 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-client-ca" (OuterVolumeSpecName: "client-ca") pod "93ec6db8-09a1-4b3b-900d-867f728452cb" (UID: "93ec6db8-09a1-4b3b-900d-867f728452cb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.402155 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ffb7e45-37e9-49cf-981c-d88916bba44b-config" (OuterVolumeSpecName: "config") pod "9ffb7e45-37e9-49cf-981c-d88916bba44b" (UID: "9ffb7e45-37e9-49cf-981c-d88916bba44b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.405012 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93ec6db8-09a1-4b3b-900d-867f728452cb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "93ec6db8-09a1-4b3b-900d-867f728452cb" (UID: "93ec6db8-09a1-4b3b-900d-867f728452cb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.405008 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ffb7e45-37e9-49cf-981c-d88916bba44b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9ffb7e45-37e9-49cf-981c-d88916bba44b" (UID: "9ffb7e45-37e9-49cf-981c-d88916bba44b"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.405219 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93ec6db8-09a1-4b3b-900d-867f728452cb-kube-api-access-5dz6b" (OuterVolumeSpecName: "kube-api-access-5dz6b") pod "93ec6db8-09a1-4b3b-900d-867f728452cb" (UID: "93ec6db8-09a1-4b3b-900d-867f728452cb"). InnerVolumeSpecName "kube-api-access-5dz6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.408255 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ffb7e45-37e9-49cf-981c-d88916bba44b-kube-api-access-dcvpb" (OuterVolumeSpecName: "kube-api-access-dcvpb") pod "9ffb7e45-37e9-49cf-981c-d88916bba44b" (UID: "9ffb7e45-37e9-49cf-981c-d88916bba44b"). InnerVolumeSpecName "kube-api-access-dcvpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.501629 5008 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ffb7e45-37e9-49cf-981c-d88916bba44b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.501665 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.501674 5008 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.501682 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93ec6db8-09a1-4b3b-900d-867f728452cb-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.501690 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ffb7e45-37e9-49cf-981c-d88916bba44b-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.501698 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dz6b\" (UniqueName: \"kubernetes.io/projected/93ec6db8-09a1-4b3b-900d-867f728452cb-kube-api-access-5dz6b\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.501708 5008 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ffb7e45-37e9-49cf-981c-d88916bba44b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.501715 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcvpb\" (UniqueName: \"kubernetes.io/projected/9ffb7e45-37e9-49cf-981c-d88916bba44b-kube-api-access-dcvpb\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:37 crc kubenswrapper[5008]: I0129 15:34:37.501723 5008 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93ec6db8-09a1-4b3b-900d-867f728452cb-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.077854 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.077866 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-555476556f-pvck6" event={"ID":"9ffb7e45-37e9-49cf-981c-d88916bba44b","Type":"ContainerDied","Data":"6390f3c64efd012633ef552d358d0db88be60b32e8bc4b6efb83125ea4fe673d"} Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.077995 5008 scope.go:117] "RemoveContainer" containerID="c5a81b7d6a5eb5b94e027d72a4da3dbb692c825c9c6bd8260d78e97a8e3f3e2b" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.079690 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" event={"ID":"93ec6db8-09a1-4b3b-900d-867f728452cb","Type":"ContainerDied","Data":"5887ca4850db20b4f0627a5f2b1d2ee4799a7e5d8d086bbb5ed85795193b59c4"} Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.079739 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.092754 5008 scope.go:117] "RemoveContainer" containerID="2e22995b163eebe80e37c0570ab875dae72b5630c85b948dc8057b5763467b37" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.119186 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-555476556f-pvck6"] Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.132524 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-555476556f-pvck6"] Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.135954 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4"] Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.138697 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6fb6f5d5c7-g6fg4"] Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.220240 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q"] Jan 29 15:34:38 crc kubenswrapper[5008]: E0129 15:34:38.220834 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ffb7e45-37e9-49cf-981c-d88916bba44b" containerName="route-controller-manager" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.220904 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ffb7e45-37e9-49cf-981c-d88916bba44b" containerName="route-controller-manager" Jan 29 15:34:38 crc kubenswrapper[5008]: E0129 15:34:38.220923 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93ec6db8-09a1-4b3b-900d-867f728452cb" containerName="controller-manager" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.220936 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="93ec6db8-09a1-4b3b-900d-867f728452cb" containerName="controller-manager" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.221181 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ffb7e45-37e9-49cf-981c-d88916bba44b" containerName="route-controller-manager" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.221213 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="93ec6db8-09a1-4b3b-900d-867f728452cb" 
containerName="controller-manager" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.221876 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.224351 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.224675 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.224853 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.224975 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.225104 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.227731 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.229804 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-56f55f798d-l7rrp"] Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.230912 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.233303 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.235128 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.235231 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.235445 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.235682 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.236285 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q"] Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.238452 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.240722 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56f55f798d-l7rrp"] Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.251543 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.312422 5008 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1abe2571-fd60-4224-b5f9-8f0b501c14ce-proxy-ca-bundles\") pod \"controller-manager-56f55f798d-l7rrp\" (UID: \"1abe2571-fd60-4224-b5f9-8f0b501c14ce\") " pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.312495 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1abe2571-fd60-4224-b5f9-8f0b501c14ce-config\") pod \"controller-manager-56f55f798d-l7rrp\" (UID: \"1abe2571-fd60-4224-b5f9-8f0b501c14ce\") " pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.312569 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f49e52-7a77-4c24-8bad-f171e4278f8e-serving-cert\") pod \"route-controller-manager-554dcd487f-hzl7q\" (UID: \"f1f49e52-7a77-4c24-8bad-f171e4278f8e\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.312720 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1abe2571-fd60-4224-b5f9-8f0b501c14ce-serving-cert\") pod \"controller-manager-56f55f798d-l7rrp\" (UID: \"1abe2571-fd60-4224-b5f9-8f0b501c14ce\") " pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.312763 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f49e52-7a77-4c24-8bad-f171e4278f8e-config\") pod \"route-controller-manager-554dcd487f-hzl7q\" (UID: \"f1f49e52-7a77-4c24-8bad-f171e4278f8e\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.312880 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1abe2571-fd60-4224-b5f9-8f0b501c14ce-client-ca\") pod \"controller-manager-56f55f798d-l7rrp\" (UID: \"1abe2571-fd60-4224-b5f9-8f0b501c14ce\") " pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.312907 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhvh8\" (UniqueName: \"kubernetes.io/projected/1abe2571-fd60-4224-b5f9-8f0b501c14ce-kube-api-access-hhvh8\") pod \"controller-manager-56f55f798d-l7rrp\" (UID: \"1abe2571-fd60-4224-b5f9-8f0b501c14ce\") " pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.312966 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppp9t\" (UniqueName: \"kubernetes.io/projected/f1f49e52-7a77-4c24-8bad-f171e4278f8e-kube-api-access-ppp9t\") pod \"route-controller-manager-554dcd487f-hzl7q\" (UID: \"f1f49e52-7a77-4c24-8bad-f171e4278f8e\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.313044 
5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1f49e52-7a77-4c24-8bad-f171e4278f8e-client-ca\") pod \"route-controller-manager-554dcd487f-hzl7q\" (UID: \"f1f49e52-7a77-4c24-8bad-f171e4278f8e\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.414057 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1f49e52-7a77-4c24-8bad-f171e4278f8e-client-ca\") pod \"route-controller-manager-554dcd487f-hzl7q\" (UID: \"f1f49e52-7a77-4c24-8bad-f171e4278f8e\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.414142 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1abe2571-fd60-4224-b5f9-8f0b501c14ce-proxy-ca-bundles\") pod \"controller-manager-56f55f798d-l7rrp\" (UID: \"1abe2571-fd60-4224-b5f9-8f0b501c14ce\") " pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.414173 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1abe2571-fd60-4224-b5f9-8f0b501c14ce-config\") pod \"controller-manager-56f55f798d-l7rrp\" (UID: \"1abe2571-fd60-4224-b5f9-8f0b501c14ce\") " pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.414220 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f49e52-7a77-4c24-8bad-f171e4278f8e-serving-cert\") pod \"route-controller-manager-554dcd487f-hzl7q\" (UID: \"f1f49e52-7a77-4c24-8bad-f171e4278f8e\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.414257 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1abe2571-fd60-4224-b5f9-8f0b501c14ce-serving-cert\") pod \"controller-manager-56f55f798d-l7rrp\" (UID: \"1abe2571-fd60-4224-b5f9-8f0b501c14ce\") " pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.414277 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f49e52-7a77-4c24-8bad-f171e4278f8e-config\") pod \"route-controller-manager-554dcd487f-hzl7q\" (UID: \"f1f49e52-7a77-4c24-8bad-f171e4278f8e\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.414305 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1abe2571-fd60-4224-b5f9-8f0b501c14ce-client-ca\") pod \"controller-manager-56f55f798d-l7rrp\" (UID: \"1abe2571-fd60-4224-b5f9-8f0b501c14ce\") " pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.414329 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhvh8\" (UniqueName: 
\"kubernetes.io/projected/1abe2571-fd60-4224-b5f9-8f0b501c14ce-kube-api-access-hhvh8\") pod \"controller-manager-56f55f798d-l7rrp\" (UID: \"1abe2571-fd60-4224-b5f9-8f0b501c14ce\") " pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.414358 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppp9t\" (UniqueName: \"kubernetes.io/projected/f1f49e52-7a77-4c24-8bad-f171e4278f8e-kube-api-access-ppp9t\") pod \"route-controller-manager-554dcd487f-hzl7q\" (UID: \"f1f49e52-7a77-4c24-8bad-f171e4278f8e\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.415955 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1f49e52-7a77-4c24-8bad-f171e4278f8e-client-ca\") pod \"route-controller-manager-554dcd487f-hzl7q\" (UID: \"f1f49e52-7a77-4c24-8bad-f171e4278f8e\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.416012 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1abe2571-fd60-4224-b5f9-8f0b501c14ce-client-ca\") pod \"controller-manager-56f55f798d-l7rrp\" (UID: \"1abe2571-fd60-4224-b5f9-8f0b501c14ce\") " pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.416124 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1abe2571-fd60-4224-b5f9-8f0b501c14ce-config\") pod \"controller-manager-56f55f798d-l7rrp\" (UID: \"1abe2571-fd60-4224-b5f9-8f0b501c14ce\") " pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.416997 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1abe2571-fd60-4224-b5f9-8f0b501c14ce-proxy-ca-bundles\") pod \"controller-manager-56f55f798d-l7rrp\" (UID: \"1abe2571-fd60-4224-b5f9-8f0b501c14ce\") " pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.417268 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f49e52-7a77-4c24-8bad-f171e4278f8e-config\") pod \"route-controller-manager-554dcd487f-hzl7q\" (UID: \"f1f49e52-7a77-4c24-8bad-f171e4278f8e\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.418837 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f49e52-7a77-4c24-8bad-f171e4278f8e-serving-cert\") pod \"route-controller-manager-554dcd487f-hzl7q\" (UID: \"f1f49e52-7a77-4c24-8bad-f171e4278f8e\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.420608 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1abe2571-fd60-4224-b5f9-8f0b501c14ce-serving-cert\") pod \"controller-manager-56f55f798d-l7rrp\" (UID: \"1abe2571-fd60-4224-b5f9-8f0b501c14ce\") " 
pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.430519 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppp9t\" (UniqueName: \"kubernetes.io/projected/f1f49e52-7a77-4c24-8bad-f171e4278f8e-kube-api-access-ppp9t\") pod \"route-controller-manager-554dcd487f-hzl7q\" (UID: \"f1f49e52-7a77-4c24-8bad-f171e4278f8e\") " pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.438002 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhvh8\" (UniqueName: \"kubernetes.io/projected/1abe2571-fd60-4224-b5f9-8f0b501c14ce-kube-api-access-hhvh8\") pod \"controller-manager-56f55f798d-l7rrp\" (UID: \"1abe2571-fd60-4224-b5f9-8f0b501c14ce\") " pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.538087 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" Jan 29 15:34:38 crc kubenswrapper[5008]: I0129 15:34:38.548665 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:39 crc kubenswrapper[5008]: I0129 15:34:39.002289 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q"] Jan 29 15:34:39 crc kubenswrapper[5008]: I0129 15:34:39.032244 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56f55f798d-l7rrp"] Jan 29 15:34:39 crc kubenswrapper[5008]: I0129 15:34:39.086589 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" event={"ID":"f1f49e52-7a77-4c24-8bad-f171e4278f8e","Type":"ContainerStarted","Data":"20e96f50276ca61b255e3eb4e2c3bc5077a19ae95d30e49d861f92b90e75d823"} Jan 29 15:34:39 crc kubenswrapper[5008]: I0129 15:34:39.088631 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" event={"ID":"1abe2571-fd60-4224-b5f9-8f0b501c14ce","Type":"ContainerStarted","Data":"4cf156e56fbe299eda1af33ba6c8769ef1c1f138ff085729ee3ea93b39c940c3"} Jan 29 15:34:39 crc kubenswrapper[5008]: I0129 15:34:39.334056 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93ec6db8-09a1-4b3b-900d-867f728452cb" path="/var/lib/kubelet/pods/93ec6db8-09a1-4b3b-900d-867f728452cb/volumes" Jan 29 15:34:39 crc kubenswrapper[5008]: I0129 15:34:39.334987 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ffb7e45-37e9-49cf-981c-d88916bba44b" path="/var/lib/kubelet/pods/9ffb7e45-37e9-49cf-981c-d88916bba44b/volumes" Jan 29 15:34:40 crc kubenswrapper[5008]: I0129 15:34:40.098239 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" event={"ID":"1abe2571-fd60-4224-b5f9-8f0b501c14ce","Type":"ContainerStarted","Data":"7b4ee8befe1d6476025015239b2664d7522cb29aa77dd77ee357198cbfb8cbff"} Jan 29 15:34:40 crc kubenswrapper[5008]: I0129 15:34:40.099475 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:40 crc kubenswrapper[5008]: I0129 
15:34:40.101347 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" event={"ID":"f1f49e52-7a77-4c24-8bad-f171e4278f8e","Type":"ContainerStarted","Data":"b4fe2770368607a122e538b36ba804d4530925293570df5539a68243dd02da22"} Jan 29 15:34:40 crc kubenswrapper[5008]: I0129 15:34:40.101765 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" Jan 29 15:34:40 crc kubenswrapper[5008]: I0129 15:34:40.108136 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" Jan 29 15:34:40 crc kubenswrapper[5008]: I0129 15:34:40.113834 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" Jan 29 15:34:40 crc kubenswrapper[5008]: I0129 15:34:40.122606 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-56f55f798d-l7rrp" podStartSLOduration=4.12258642 podStartE2EDuration="4.12258642s" podCreationTimestamp="2026-01-29 15:34:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:34:40.121961444 +0000 UTC m=+423.794815691" watchObservedRunningTime="2026-01-29 15:34:40.12258642 +0000 UTC m=+423.795440667" Jan 29 15:34:40 crc kubenswrapper[5008]: I0129 15:34:40.143559 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-554dcd487f-hzl7q" podStartSLOduration=4.143538362 podStartE2EDuration="4.143538362s" podCreationTimestamp="2026-01-29 15:34:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:34:40.138242555 +0000 UTC m=+423.811096802" watchObservedRunningTime="2026-01-29 15:34:40.143538362 +0000 UTC m=+423.816392619" Jan 29 15:34:43 crc kubenswrapper[5008]: I0129 15:34:43.990587 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:34:43 crc kubenswrapper[5008]: I0129 15:34:43.991158 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:34:43 crc kubenswrapper[5008]: I0129 15:34:43.991241 5008 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:34:43 crc kubenswrapper[5008]: I0129 15:34:43.992168 5008 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1094d3e48c81c3e2ea9f57f39bbd7ccc01c1ccc72a4337e691b80548a8d40521"} pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:34:43 crc 
kubenswrapper[5008]: I0129 15:34:43.992295 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" containerID="cri-o://1094d3e48c81c3e2ea9f57f39bbd7ccc01c1ccc72a4337e691b80548a8d40521" gracePeriod=600 Jan 29 15:34:44 crc kubenswrapper[5008]: I0129 15:34:44.127999 5008 generic.go:334] "Generic (PLEG): container finished" podID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerID="1094d3e48c81c3e2ea9f57f39bbd7ccc01c1ccc72a4337e691b80548a8d40521" exitCode=0 Jan 29 15:34:44 crc kubenswrapper[5008]: I0129 15:34:44.128049 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerDied","Data":"1094d3e48c81c3e2ea9f57f39bbd7ccc01c1ccc72a4337e691b80548a8d40521"} Jan 29 15:34:44 crc kubenswrapper[5008]: I0129 15:34:44.128105 5008 scope.go:117] "RemoveContainer" containerID="b4781ea933d8ce868cf1da4b2890797c16012b434ce074870a59307d61a3c731" Jan 29 15:34:45 crc kubenswrapper[5008]: I0129 15:34:45.137513 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerStarted","Data":"9850a434d4d07df0fe32aef86e993277e84b797db07cefc7dc516322c6794dab"} Jan 29 15:34:53 crc kubenswrapper[5008]: I0129 15:34:53.138486 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-nppsr" Jan 29 15:34:53 crc kubenswrapper[5008]: I0129 15:34:53.186021 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qm54x"] Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.653316 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cwgw5"] Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.654632 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cwgw5" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" containerName="registry-server" containerID="cri-o://fb026266eabc9b6ace205f36e42b0dab030a6b065f770827028c0ed16d1aa84f" gracePeriod=30 Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.660011 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4dwdf"] Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.661007 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4dwdf" podUID="d2d42845-cca1-4b60-bc84-4b2baebf702b" containerName="registry-server" containerID="cri-o://f602032356e6af24b6539dc335606faed034c76d076edd55de00a1f6423d0579" gracePeriod=30 Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.694993 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4268l"] Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.695852 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-4268l" podUID="7473d665-3627-4470-a820-ebdbdc113587" containerName="marketplace-operator" containerID="cri-o://8d7598ad2c3c5a660fb19d3ee369a6710759e6bbe8cbe47b3f02e5b7530f821c" gracePeriod=30 Jan 29 15:35:01 crc 
kubenswrapper[5008]: I0129 15:35:01.708867 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkxw5"] Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.709102 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mkxw5" podUID="6aef1830-577d-405c-bb54-6f9fe217ae86" containerName="registry-server" containerID="cri-o://ed3317e50ebd56908f1ad0d5cbc15af6b8fc520caee4385415a1615527ccd62b" gracePeriod=30 Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.717112 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pz9kz"] Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.719997 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pz9kz" Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.726219 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tst9c"] Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.726480 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tst9c" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" containerName="registry-server" containerID="cri-o://9c3f342d019c4b99216e2db36a8519922ee184a93aa73ddc5f5e324d243d11e6" gracePeriod=30 Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.733379 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pz9kz"] Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.844827 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkr9j\" (UniqueName: \"kubernetes.io/projected/077a9343-695d-4180-9255-41f1eaeb58a3-kube-api-access-gkr9j\") pod \"marketplace-operator-79b997595-pz9kz\" (UID: \"077a9343-695d-4180-9255-41f1eaeb58a3\") " pod="openshift-marketplace/marketplace-operator-79b997595-pz9kz" Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.845229 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/077a9343-695d-4180-9255-41f1eaeb58a3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pz9kz\" (UID: \"077a9343-695d-4180-9255-41f1eaeb58a3\") " pod="openshift-marketplace/marketplace-operator-79b997595-pz9kz" Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.845287 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/077a9343-695d-4180-9255-41f1eaeb58a3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pz9kz\" (UID: \"077a9343-695d-4180-9255-41f1eaeb58a3\") " pod="openshift-marketplace/marketplace-operator-79b997595-pz9kz" Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.946166 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/077a9343-695d-4180-9255-41f1eaeb58a3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pz9kz\" (UID: \"077a9343-695d-4180-9255-41f1eaeb58a3\") " pod="openshift-marketplace/marketplace-operator-79b997595-pz9kz" Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.946224 5008 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gkr9j\" (UniqueName: \"kubernetes.io/projected/077a9343-695d-4180-9255-41f1eaeb58a3-kube-api-access-gkr9j\") pod \"marketplace-operator-79b997595-pz9kz\" (UID: \"077a9343-695d-4180-9255-41f1eaeb58a3\") " pod="openshift-marketplace/marketplace-operator-79b997595-pz9kz" Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.946259 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/077a9343-695d-4180-9255-41f1eaeb58a3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pz9kz\" (UID: \"077a9343-695d-4180-9255-41f1eaeb58a3\") " pod="openshift-marketplace/marketplace-operator-79b997595-pz9kz" Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.947290 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/077a9343-695d-4180-9255-41f1eaeb58a3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pz9kz\" (UID: \"077a9343-695d-4180-9255-41f1eaeb58a3\") " pod="openshift-marketplace/marketplace-operator-79b997595-pz9kz" Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.954590 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/077a9343-695d-4180-9255-41f1eaeb58a3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pz9kz\" (UID: \"077a9343-695d-4180-9255-41f1eaeb58a3\") " pod="openshift-marketplace/marketplace-operator-79b997595-pz9kz" Jan 29 15:35:01 crc kubenswrapper[5008]: I0129 15:35:01.995604 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkr9j\" (UniqueName: \"kubernetes.io/projected/077a9343-695d-4180-9255-41f1eaeb58a3-kube-api-access-gkr9j\") pod \"marketplace-operator-79b997595-pz9kz\" (UID: \"077a9343-695d-4180-9255-41f1eaeb58a3\") " pod="openshift-marketplace/marketplace-operator-79b997595-pz9kz" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.047450 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pz9kz" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.245590 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cwgw5" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.246963 5008 generic.go:334] "Generic (PLEG): container finished" podID="d2d42845-cca1-4b60-bc84-4b2baebf702b" containerID="f602032356e6af24b6539dc335606faed034c76d076edd55de00a1f6423d0579" exitCode=0 Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.247029 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4dwdf" event={"ID":"d2d42845-cca1-4b60-bc84-4b2baebf702b","Type":"ContainerDied","Data":"f602032356e6af24b6539dc335606faed034c76d076edd55de00a1f6423d0579"} Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.248879 5008 generic.go:334] "Generic (PLEG): container finished" podID="6aef1830-577d-405c-bb54-6f9fe217ae86" containerID="ed3317e50ebd56908f1ad0d5cbc15af6b8fc520caee4385415a1615527ccd62b" exitCode=0 Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.248959 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkxw5" event={"ID":"6aef1830-577d-405c-bb54-6f9fe217ae86","Type":"ContainerDied","Data":"ed3317e50ebd56908f1ad0d5cbc15af6b8fc520caee4385415a1615527ccd62b"} Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.250797 5008 generic.go:334] "Generic (PLEG): container finished" podID="6aebe040-289b-48c1-a825-f12b471a5ad6" containerID="fb026266eabc9b6ace205f36e42b0dab030a6b065f770827028c0ed16d1aa84f" exitCode=0 Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.250858 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwgw5" event={"ID":"6aebe040-289b-48c1-a825-f12b471a5ad6","Type":"ContainerDied","Data":"fb026266eabc9b6ace205f36e42b0dab030a6b065f770827028c0ed16d1aa84f"} Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.250863 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cwgw5" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.250877 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwgw5" event={"ID":"6aebe040-289b-48c1-a825-f12b471a5ad6","Type":"ContainerDied","Data":"54d6cf905ba0c9c55baea0b1bbde4338656f4661c2571ae702fdc0067f3ef4cb"} Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.250897 5008 scope.go:117] "RemoveContainer" containerID="fb026266eabc9b6ace205f36e42b0dab030a6b065f770827028c0ed16d1aa84f" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.254009 5008 generic.go:334] "Generic (PLEG): container finished" podID="7473d665-3627-4470-a820-ebdbdc113587" containerID="8d7598ad2c3c5a660fb19d3ee369a6710759e6bbe8cbe47b3f02e5b7530f821c" exitCode=0 Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.254166 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4268l" event={"ID":"7473d665-3627-4470-a820-ebdbdc113587","Type":"ContainerDied","Data":"8d7598ad2c3c5a660fb19d3ee369a6710759e6bbe8cbe47b3f02e5b7530f821c"} Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.273045 5008 scope.go:117] "RemoveContainer" containerID="b7bd66f1ab52d36602a85b79dd606c04b810e09efd18dedd3f58cfeff8f24869" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.273109 5008 generic.go:334] "Generic (PLEG): container finished" podID="ea8deba9-72cb-4274-add1-e80591a9e7cc" containerID="9c3f342d019c4b99216e2db36a8519922ee184a93aa73ddc5f5e324d243d11e6" exitCode=0 Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.273129 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tst9c" event={"ID":"ea8deba9-72cb-4274-add1-e80591a9e7cc","Type":"ContainerDied","Data":"9c3f342d019c4b99216e2db36a8519922ee184a93aa73ddc5f5e324d243d11e6"} Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.304999 5008 scope.go:117] "RemoveContainer" containerID="f52329f3f265a1114741db2a28bb35b1a3c05c140e0374037d9b0d6bd838822b" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.337474 5008 scope.go:117] "RemoveContainer" containerID="fb026266eabc9b6ace205f36e42b0dab030a6b065f770827028c0ed16d1aa84f" Jan 29 15:35:02 crc kubenswrapper[5008]: E0129 15:35:02.342302 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb026266eabc9b6ace205f36e42b0dab030a6b065f770827028c0ed16d1aa84f\": container with ID starting with fb026266eabc9b6ace205f36e42b0dab030a6b065f770827028c0ed16d1aa84f not found: ID does not exist" containerID="fb026266eabc9b6ace205f36e42b0dab030a6b065f770827028c0ed16d1aa84f" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.342365 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb026266eabc9b6ace205f36e42b0dab030a6b065f770827028c0ed16d1aa84f"} err="failed to get container status \"fb026266eabc9b6ace205f36e42b0dab030a6b065f770827028c0ed16d1aa84f\": rpc error: code = NotFound desc = could not find container \"fb026266eabc9b6ace205f36e42b0dab030a6b065f770827028c0ed16d1aa84f\": container with ID starting with fb026266eabc9b6ace205f36e42b0dab030a6b065f770827028c0ed16d1aa84f not found: ID does not exist" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.342397 5008 scope.go:117] "RemoveContainer" containerID="b7bd66f1ab52d36602a85b79dd606c04b810e09efd18dedd3f58cfeff8f24869" Jan 29 15:35:02 crc 
kubenswrapper[5008]: E0129 15:35:02.342834 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7bd66f1ab52d36602a85b79dd606c04b810e09efd18dedd3f58cfeff8f24869\": container with ID starting with b7bd66f1ab52d36602a85b79dd606c04b810e09efd18dedd3f58cfeff8f24869 not found: ID does not exist" containerID="b7bd66f1ab52d36602a85b79dd606c04b810e09efd18dedd3f58cfeff8f24869" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.342856 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7bd66f1ab52d36602a85b79dd606c04b810e09efd18dedd3f58cfeff8f24869"} err="failed to get container status \"b7bd66f1ab52d36602a85b79dd606c04b810e09efd18dedd3f58cfeff8f24869\": rpc error: code = NotFound desc = could not find container \"b7bd66f1ab52d36602a85b79dd606c04b810e09efd18dedd3f58cfeff8f24869\": container with ID starting with b7bd66f1ab52d36602a85b79dd606c04b810e09efd18dedd3f58cfeff8f24869 not found: ID does not exist" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.342872 5008 scope.go:117] "RemoveContainer" containerID="f52329f3f265a1114741db2a28bb35b1a3c05c140e0374037d9b0d6bd838822b" Jan 29 15:35:02 crc kubenswrapper[5008]: E0129 15:35:02.344987 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f52329f3f265a1114741db2a28bb35b1a3c05c140e0374037d9b0d6bd838822b\": container with ID starting with f52329f3f265a1114741db2a28bb35b1a3c05c140e0374037d9b0d6bd838822b not found: ID does not exist" containerID="f52329f3f265a1114741db2a28bb35b1a3c05c140e0374037d9b0d6bd838822b" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.345033 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f52329f3f265a1114741db2a28bb35b1a3c05c140e0374037d9b0d6bd838822b"} err="failed to get container status \"f52329f3f265a1114741db2a28bb35b1a3c05c140e0374037d9b0d6bd838822b\": rpc error: code = NotFound desc = could not find container \"f52329f3f265a1114741db2a28bb35b1a3c05c140e0374037d9b0d6bd838822b\": container with ID starting with f52329f3f265a1114741db2a28bb35b1a3c05c140e0374037d9b0d6bd838822b not found: ID does not exist" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.350319 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aebe040-289b-48c1-a825-f12b471a5ad6-catalog-content\") pod \"6aebe040-289b-48c1-a825-f12b471a5ad6\" (UID: \"6aebe040-289b-48c1-a825-f12b471a5ad6\") " Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.350423 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dldqp\" (UniqueName: \"kubernetes.io/projected/6aebe040-289b-48c1-a825-f12b471a5ad6-kube-api-access-dldqp\") pod \"6aebe040-289b-48c1-a825-f12b471a5ad6\" (UID: \"6aebe040-289b-48c1-a825-f12b471a5ad6\") " Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.350495 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aebe040-289b-48c1-a825-f12b471a5ad6-utilities\") pod \"6aebe040-289b-48c1-a825-f12b471a5ad6\" (UID: \"6aebe040-289b-48c1-a825-f12b471a5ad6\") " Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.351544 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/6aebe040-289b-48c1-a825-f12b471a5ad6-utilities" (OuterVolumeSpecName: "utilities") pod "6aebe040-289b-48c1-a825-f12b471a5ad6" (UID: "6aebe040-289b-48c1-a825-f12b471a5ad6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.355963 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aebe040-289b-48c1-a825-f12b471a5ad6-kube-api-access-dldqp" (OuterVolumeSpecName: "kube-api-access-dldqp") pod "6aebe040-289b-48c1-a825-f12b471a5ad6" (UID: "6aebe040-289b-48c1-a825-f12b471a5ad6"). InnerVolumeSpecName "kube-api-access-dldqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.410859 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6aebe040-289b-48c1-a825-f12b471a5ad6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6aebe040-289b-48c1-a825-f12b471a5ad6" (UID: "6aebe040-289b-48c1-a825-f12b471a5ad6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.452574 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aebe040-289b-48c1-a825-f12b471a5ad6-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.452899 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aebe040-289b-48c1-a825-f12b471a5ad6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.453180 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dldqp\" (UniqueName: \"kubernetes.io/projected/6aebe040-289b-48c1-a825-f12b471a5ad6-kube-api-access-dldqp\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.459203 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkxw5" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.467699 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4dwdf" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.487662 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tst9c" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.524104 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4268l" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.556500 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea8deba9-72cb-4274-add1-e80591a9e7cc-utilities\") pod \"ea8deba9-72cb-4274-add1-e80591a9e7cc\" (UID: \"ea8deba9-72cb-4274-add1-e80591a9e7cc\") " Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.556812 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7473d665-3627-4470-a820-ebdbdc113587-marketplace-operator-metrics\") pod \"7473d665-3627-4470-a820-ebdbdc113587\" (UID: \"7473d665-3627-4470-a820-ebdbdc113587\") " Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.556998 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aef1830-577d-405c-bb54-6f9fe217ae86-utilities\") pod \"6aef1830-577d-405c-bb54-6f9fe217ae86\" (UID: \"6aef1830-577d-405c-bb54-6f9fe217ae86\") " Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.557091 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2kqn\" (UniqueName: \"kubernetes.io/projected/7473d665-3627-4470-a820-ebdbdc113587-kube-api-access-l2kqn\") pod \"7473d665-3627-4470-a820-ebdbdc113587\" (UID: \"7473d665-3627-4470-a820-ebdbdc113587\") " Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.557183 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aef1830-577d-405c-bb54-6f9fe217ae86-catalog-content\") pod \"6aef1830-577d-405c-bb54-6f9fe217ae86\" (UID: \"6aef1830-577d-405c-bb54-6f9fe217ae86\") " Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.557287 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftbd9\" (UniqueName: \"kubernetes.io/projected/6aef1830-577d-405c-bb54-6f9fe217ae86-kube-api-access-ftbd9\") pod \"6aef1830-577d-405c-bb54-6f9fe217ae86\" (UID: \"6aef1830-577d-405c-bb54-6f9fe217ae86\") " Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.557387 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7473d665-3627-4470-a820-ebdbdc113587-marketplace-trusted-ca\") pod \"7473d665-3627-4470-a820-ebdbdc113587\" (UID: \"7473d665-3627-4470-a820-ebdbdc113587\") " Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.557466 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2d42845-cca1-4b60-bc84-4b2baebf702b-utilities\") pod \"d2d42845-cca1-4b60-bc84-4b2baebf702b\" (UID: \"d2d42845-cca1-4b60-bc84-4b2baebf702b\") " Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.557604 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea8deba9-72cb-4274-add1-e80591a9e7cc-catalog-content\") pod \"ea8deba9-72cb-4274-add1-e80591a9e7cc\" (UID: \"ea8deba9-72cb-4274-add1-e80591a9e7cc\") " Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.557743 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-229kp\" (UniqueName: 
\"kubernetes.io/projected/ea8deba9-72cb-4274-add1-e80591a9e7cc-kube-api-access-229kp\") pod \"ea8deba9-72cb-4274-add1-e80591a9e7cc\" (UID: \"ea8deba9-72cb-4274-add1-e80591a9e7cc\") " Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.557909 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8q2q\" (UniqueName: \"kubernetes.io/projected/d2d42845-cca1-4b60-bc84-4b2baebf702b-kube-api-access-s8q2q\") pod \"d2d42845-cca1-4b60-bc84-4b2baebf702b\" (UID: \"d2d42845-cca1-4b60-bc84-4b2baebf702b\") " Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.558043 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2d42845-cca1-4b60-bc84-4b2baebf702b-catalog-content\") pod \"d2d42845-cca1-4b60-bc84-4b2baebf702b\" (UID: \"d2d42845-cca1-4b60-bc84-4b2baebf702b\") " Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.557196 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea8deba9-72cb-4274-add1-e80591a9e7cc-utilities" (OuterVolumeSpecName: "utilities") pod "ea8deba9-72cb-4274-add1-e80591a9e7cc" (UID: "ea8deba9-72cb-4274-add1-e80591a9e7cc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.558219 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7473d665-3627-4470-a820-ebdbdc113587-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "7473d665-3627-4470-a820-ebdbdc113587" (UID: "7473d665-3627-4470-a820-ebdbdc113587"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.558194 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2d42845-cca1-4b60-bc84-4b2baebf702b-utilities" (OuterVolumeSpecName: "utilities") pod "d2d42845-cca1-4b60-bc84-4b2baebf702b" (UID: "d2d42845-cca1-4b60-bc84-4b2baebf702b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.558382 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6aef1830-577d-405c-bb54-6f9fe217ae86-utilities" (OuterVolumeSpecName: "utilities") pod "6aef1830-577d-405c-bb54-6f9fe217ae86" (UID: "6aef1830-577d-405c-bb54-6f9fe217ae86"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.558648 5008 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7473d665-3627-4470-a820-ebdbdc113587-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.558746 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2d42845-cca1-4b60-bc84-4b2baebf702b-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.558857 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea8deba9-72cb-4274-add1-e80591a9e7cc-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.558950 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aef1830-577d-405c-bb54-6f9fe217ae86-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.588103 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6aef1830-577d-405c-bb54-6f9fe217ae86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6aef1830-577d-405c-bb54-6f9fe217ae86" (UID: "6aef1830-577d-405c-bb54-6f9fe217ae86"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.605487 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aef1830-577d-405c-bb54-6f9fe217ae86-kube-api-access-ftbd9" (OuterVolumeSpecName: "kube-api-access-ftbd9") pod "6aef1830-577d-405c-bb54-6f9fe217ae86" (UID: "6aef1830-577d-405c-bb54-6f9fe217ae86"). InnerVolumeSpecName "kube-api-access-ftbd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.605594 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2d42845-cca1-4b60-bc84-4b2baebf702b-kube-api-access-s8q2q" (OuterVolumeSpecName: "kube-api-access-s8q2q") pod "d2d42845-cca1-4b60-bc84-4b2baebf702b" (UID: "d2d42845-cca1-4b60-bc84-4b2baebf702b"). InnerVolumeSpecName "kube-api-access-s8q2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.608222 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7473d665-3627-4470-a820-ebdbdc113587-kube-api-access-l2kqn" (OuterVolumeSpecName: "kube-api-access-l2kqn") pod "7473d665-3627-4470-a820-ebdbdc113587" (UID: "7473d665-3627-4470-a820-ebdbdc113587"). InnerVolumeSpecName "kube-api-access-l2kqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.608676 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea8deba9-72cb-4274-add1-e80591a9e7cc-kube-api-access-229kp" (OuterVolumeSpecName: "kube-api-access-229kp") pod "ea8deba9-72cb-4274-add1-e80591a9e7cc" (UID: "ea8deba9-72cb-4274-add1-e80591a9e7cc"). InnerVolumeSpecName "kube-api-access-229kp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.608828 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7473d665-3627-4470-a820-ebdbdc113587-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "7473d665-3627-4470-a820-ebdbdc113587" (UID: "7473d665-3627-4470-a820-ebdbdc113587"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.621314 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cwgw5"] Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.621367 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2d42845-cca1-4b60-bc84-4b2baebf702b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2d42845-cca1-4b60-bc84-4b2baebf702b" (UID: "d2d42845-cca1-4b60-bc84-4b2baebf702b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.626149 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cwgw5"] Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.659999 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pz9kz"] Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.660702 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-229kp\" (UniqueName: \"kubernetes.io/projected/ea8deba9-72cb-4274-add1-e80591a9e7cc-kube-api-access-229kp\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.660724 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8q2q\" (UniqueName: \"kubernetes.io/projected/d2d42845-cca1-4b60-bc84-4b2baebf702b-kube-api-access-s8q2q\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.660737 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2d42845-cca1-4b60-bc84-4b2baebf702b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.660748 5008 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7473d665-3627-4470-a820-ebdbdc113587-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.660762 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2kqn\" (UniqueName: \"kubernetes.io/projected/7473d665-3627-4470-a820-ebdbdc113587-kube-api-access-l2kqn\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.660774 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aef1830-577d-405c-bb54-6f9fe217ae86-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.660804 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftbd9\" (UniqueName: \"kubernetes.io/projected/6aef1830-577d-405c-bb54-6f9fe217ae86-kube-api-access-ftbd9\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.702960 5008 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea8deba9-72cb-4274-add1-e80591a9e7cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea8deba9-72cb-4274-add1-e80591a9e7cc" (UID: "ea8deba9-72cb-4274-add1-e80591a9e7cc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:35:02 crc kubenswrapper[5008]: I0129 15:35:02.763104 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea8deba9-72cb-4274-add1-e80591a9e7cc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.280294 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4268l" event={"ID":"7473d665-3627-4470-a820-ebdbdc113587","Type":"ContainerDied","Data":"744d2c5b14b18a0366937cb219697ae3c655391e7942e7c446395ce7d6b803ff"} Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.280569 5008 scope.go:117] "RemoveContainer" containerID="8d7598ad2c3c5a660fb19d3ee369a6710759e6bbe8cbe47b3f02e5b7530f821c" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.280302 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4268l" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.282825 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tst9c" event={"ID":"ea8deba9-72cb-4274-add1-e80591a9e7cc","Type":"ContainerDied","Data":"add0ef656328b3411c8246a1cffa7e2baeefc91f711bf33d67c37a176e10eb38"} Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.282893 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tst9c" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.288300 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pz9kz" event={"ID":"077a9343-695d-4180-9255-41f1eaeb58a3","Type":"ContainerStarted","Data":"8c5780bdf73732a664202a63403be5237694bfd4cb9a15e445217aa18813d668"} Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.288324 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pz9kz" event={"ID":"077a9343-695d-4180-9255-41f1eaeb58a3","Type":"ContainerStarted","Data":"ef7302fd31879c7584c1dc343696e2b136034cd38c8da40fd9a6416da7664dc8"} Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.289529 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-pz9kz" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.293366 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4dwdf" event={"ID":"d2d42845-cca1-4b60-bc84-4b2baebf702b","Type":"ContainerDied","Data":"dd8d6696ceba57808730ee9b74baad13f0f3efae19998fb92ff0c2c357522c56"} Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.293539 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4dwdf" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.294733 5008 scope.go:117] "RemoveContainer" containerID="9c3f342d019c4b99216e2db36a8519922ee184a93aa73ddc5f5e324d243d11e6" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.306557 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkxw5" event={"ID":"6aef1830-577d-405c-bb54-6f9fe217ae86","Type":"ContainerDied","Data":"57f282b94968e79e724bd40448547c7c110b5b3c35e9677aea1eb21b270ed1d9"} Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.306606 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkxw5" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.316446 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-pz9kz" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.322342 5008 scope.go:117] "RemoveContainer" containerID="c66762f5da3eb3376b4ceceb433da1a00c15c72c9c525f47d7d7528bad62fea4" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.328144 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-pz9kz" podStartSLOduration=2.328117761 podStartE2EDuration="2.328117761s" podCreationTimestamp="2026-01-29 15:35:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:35:03.310630303 +0000 UTC m=+446.983484540" watchObservedRunningTime="2026-01-29 15:35:03.328117761 +0000 UTC m=+447.000971998" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.347295 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" path="/var/lib/kubelet/pods/6aebe040-289b-48c1-a825-f12b471a5ad6/volumes" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.348341 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4268l"] Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.348385 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4268l"] Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.350893 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tst9c"] Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.353848 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tst9c"] Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.368173 5008 scope.go:117] "RemoveContainer" containerID="4b51ccd27d29592df8a7bede95816e1b7ee7978e1541458bdd34bb868c6e0912" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.384858 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4dwdf"] Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.387912 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4dwdf"] Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.393976 5008 scope.go:117] "RemoveContainer" containerID="f602032356e6af24b6539dc335606faed034c76d076edd55de00a1f6423d0579" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.394537 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-mkxw5"] Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.410216 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkxw5"] Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.417271 5008 scope.go:117] "RemoveContainer" containerID="5ef6720d337e6b7bdd09776b3452601c072f482c35a5a9e55c34041df49ba20b" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.440237 5008 scope.go:117] "RemoveContainer" containerID="62b0c01ef29dcd7c7957aa7b9fba8ee02c41e66ab0221b57ac7769babd464e8c" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.454814 5008 scope.go:117] "RemoveContainer" containerID="ed3317e50ebd56908f1ad0d5cbc15af6b8fc520caee4385415a1615527ccd62b" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.475343 5008 scope.go:117] "RemoveContainer" containerID="6fbbb1c70108b41582b5edef8de3a67424fd51168b22d0d1f5469f11eceefd27" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.486238 5008 scope.go:117] "RemoveContainer" containerID="b4ed1901a1ac7d83b698c4d263db5514ae2a4bf0aab0e1f9032c155913f5bd2d" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.871142 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nd64n"] Jan 29 15:35:03 crc kubenswrapper[5008]: E0129 15:35:03.871694 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" containerName="extract-content" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.871717 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" containerName="extract-content" Jan 29 15:35:03 crc kubenswrapper[5008]: E0129 15:35:03.871765 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2d42845-cca1-4b60-bc84-4b2baebf702b" containerName="registry-server" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.871821 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d42845-cca1-4b60-bc84-4b2baebf702b" containerName="registry-server" Jan 29 15:35:03 crc kubenswrapper[5008]: E0129 15:35:03.871838 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2d42845-cca1-4b60-bc84-4b2baebf702b" containerName="extract-utilities" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.871851 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d42845-cca1-4b60-bc84-4b2baebf702b" containerName="extract-utilities" Jan 29 15:35:03 crc kubenswrapper[5008]: E0129 15:35:03.871874 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" containerName="extract-utilities" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.871927 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" containerName="extract-utilities" Jan 29 15:35:03 crc kubenswrapper[5008]: E0129 15:35:03.871946 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2d42845-cca1-4b60-bc84-4b2baebf702b" containerName="extract-content" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.871958 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d42845-cca1-4b60-bc84-4b2baebf702b" containerName="extract-content" Jan 29 15:35:03 crc kubenswrapper[5008]: E0129 15:35:03.872012 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" containerName="registry-server" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.872027 
5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" containerName="registry-server" Jan 29 15:35:03 crc kubenswrapper[5008]: E0129 15:35:03.872052 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aef1830-577d-405c-bb54-6f9fe217ae86" containerName="registry-server" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.872064 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aef1830-577d-405c-bb54-6f9fe217ae86" containerName="registry-server" Jan 29 15:35:03 crc kubenswrapper[5008]: E0129 15:35:03.872124 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" containerName="registry-server" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.872136 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" containerName="registry-server" Jan 29 15:35:03 crc kubenswrapper[5008]: E0129 15:35:03.872149 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" containerName="extract-content" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.872199 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" containerName="extract-content" Jan 29 15:35:03 crc kubenswrapper[5008]: E0129 15:35:03.872214 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" containerName="extract-utilities" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.872226 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" containerName="extract-utilities" Jan 29 15:35:03 crc kubenswrapper[5008]: E0129 15:35:03.872245 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aef1830-577d-405c-bb54-6f9fe217ae86" containerName="extract-content" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.872296 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aef1830-577d-405c-bb54-6f9fe217ae86" containerName="extract-content" Jan 29 15:35:03 crc kubenswrapper[5008]: E0129 15:35:03.872317 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aef1830-577d-405c-bb54-6f9fe217ae86" containerName="extract-utilities" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.872329 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aef1830-577d-405c-bb54-6f9fe217ae86" containerName="extract-utilities" Jan 29 15:35:03 crc kubenswrapper[5008]: E0129 15:35:03.872345 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7473d665-3627-4470-a820-ebdbdc113587" containerName="marketplace-operator" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.872396 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="7473d665-3627-4470-a820-ebdbdc113587" containerName="marketplace-operator" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.872608 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" containerName="registry-server" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.872630 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2d42845-cca1-4b60-bc84-4b2baebf702b" containerName="registry-server" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.872655 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aebe040-289b-48c1-a825-f12b471a5ad6" containerName="registry-server" Jan 29 15:35:03 crc 
kubenswrapper[5008]: I0129 15:35:03.872678 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aef1830-577d-405c-bb54-6f9fe217ae86" containerName="registry-server" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.872692 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="7473d665-3627-4470-a820-ebdbdc113587" containerName="marketplace-operator" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.873920 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nd64n" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.876617 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.880838 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nd64n"] Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.982476 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1babb539-12b9-4532-b9c3-bc165829c40e-catalog-content\") pod \"redhat-marketplace-nd64n\" (UID: \"1babb539-12b9-4532-b9c3-bc165829c40e\") " pod="openshift-marketplace/redhat-marketplace-nd64n" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.982594 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fv49\" (UniqueName: \"kubernetes.io/projected/1babb539-12b9-4532-b9c3-bc165829c40e-kube-api-access-8fv49\") pod \"redhat-marketplace-nd64n\" (UID: \"1babb539-12b9-4532-b9c3-bc165829c40e\") " pod="openshift-marketplace/redhat-marketplace-nd64n" Jan 29 15:35:03 crc kubenswrapper[5008]: I0129 15:35:03.982633 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1babb539-12b9-4532-b9c3-bc165829c40e-utilities\") pod \"redhat-marketplace-nd64n\" (UID: \"1babb539-12b9-4532-b9c3-bc165829c40e\") " pod="openshift-marketplace/redhat-marketplace-nd64n" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.071107 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5g5wg"] Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.072044 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5g5wg" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.073873 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.085409 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fv49\" (UniqueName: \"kubernetes.io/projected/1babb539-12b9-4532-b9c3-bc165829c40e-kube-api-access-8fv49\") pod \"redhat-marketplace-nd64n\" (UID: \"1babb539-12b9-4532-b9c3-bc165829c40e\") " pod="openshift-marketplace/redhat-marketplace-nd64n" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.085457 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1babb539-12b9-4532-b9c3-bc165829c40e-utilities\") pod \"redhat-marketplace-nd64n\" (UID: \"1babb539-12b9-4532-b9c3-bc165829c40e\") " pod="openshift-marketplace/redhat-marketplace-nd64n" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.085519 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1babb539-12b9-4532-b9c3-bc165829c40e-catalog-content\") pod \"redhat-marketplace-nd64n\" (UID: \"1babb539-12b9-4532-b9c3-bc165829c40e\") " pod="openshift-marketplace/redhat-marketplace-nd64n" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.085963 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1babb539-12b9-4532-b9c3-bc165829c40e-catalog-content\") pod \"redhat-marketplace-nd64n\" (UID: \"1babb539-12b9-4532-b9c3-bc165829c40e\") " pod="openshift-marketplace/redhat-marketplace-nd64n" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.086502 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1babb539-12b9-4532-b9c3-bc165829c40e-utilities\") pod \"redhat-marketplace-nd64n\" (UID: \"1babb539-12b9-4532-b9c3-bc165829c40e\") " pod="openshift-marketplace/redhat-marketplace-nd64n" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.087313 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5g5wg"] Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.112802 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fv49\" (UniqueName: \"kubernetes.io/projected/1babb539-12b9-4532-b9c3-bc165829c40e-kube-api-access-8fv49\") pod \"redhat-marketplace-nd64n\" (UID: \"1babb539-12b9-4532-b9c3-bc165829c40e\") " pod="openshift-marketplace/redhat-marketplace-nd64n" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.186839 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fbd5270-4a24-47ba-a0cf-0c3382a833c0-utilities\") pod \"redhat-operators-5g5wg\" (UID: \"5fbd5270-4a24-47ba-a0cf-0c3382a833c0\") " pod="openshift-marketplace/redhat-operators-5g5wg" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.186899 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fbd5270-4a24-47ba-a0cf-0c3382a833c0-catalog-content\") pod \"redhat-operators-5g5wg\" (UID: \"5fbd5270-4a24-47ba-a0cf-0c3382a833c0\") " 
pod="openshift-marketplace/redhat-operators-5g5wg" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.187102 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p92fr\" (UniqueName: \"kubernetes.io/projected/5fbd5270-4a24-47ba-a0cf-0c3382a833c0-kube-api-access-p92fr\") pod \"redhat-operators-5g5wg\" (UID: \"5fbd5270-4a24-47ba-a0cf-0c3382a833c0\") " pod="openshift-marketplace/redhat-operators-5g5wg" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.202659 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nd64n" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.288470 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fbd5270-4a24-47ba-a0cf-0c3382a833c0-utilities\") pod \"redhat-operators-5g5wg\" (UID: \"5fbd5270-4a24-47ba-a0cf-0c3382a833c0\") " pod="openshift-marketplace/redhat-operators-5g5wg" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.288515 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fbd5270-4a24-47ba-a0cf-0c3382a833c0-catalog-content\") pod \"redhat-operators-5g5wg\" (UID: \"5fbd5270-4a24-47ba-a0cf-0c3382a833c0\") " pod="openshift-marketplace/redhat-operators-5g5wg" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.288562 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p92fr\" (UniqueName: \"kubernetes.io/projected/5fbd5270-4a24-47ba-a0cf-0c3382a833c0-kube-api-access-p92fr\") pod \"redhat-operators-5g5wg\" (UID: \"5fbd5270-4a24-47ba-a0cf-0c3382a833c0\") " pod="openshift-marketplace/redhat-operators-5g5wg" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.289200 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fbd5270-4a24-47ba-a0cf-0c3382a833c0-utilities\") pod \"redhat-operators-5g5wg\" (UID: \"5fbd5270-4a24-47ba-a0cf-0c3382a833c0\") " pod="openshift-marketplace/redhat-operators-5g5wg" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.289329 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fbd5270-4a24-47ba-a0cf-0c3382a833c0-catalog-content\") pod \"redhat-operators-5g5wg\" (UID: \"5fbd5270-4a24-47ba-a0cf-0c3382a833c0\") " pod="openshift-marketplace/redhat-operators-5g5wg" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.310978 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p92fr\" (UniqueName: \"kubernetes.io/projected/5fbd5270-4a24-47ba-a0cf-0c3382a833c0-kube-api-access-p92fr\") pod \"redhat-operators-5g5wg\" (UID: \"5fbd5270-4a24-47ba-a0cf-0c3382a833c0\") " pod="openshift-marketplace/redhat-operators-5g5wg" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.405375 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5g5wg" Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.586498 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nd64n"] Jan 29 15:35:04 crc kubenswrapper[5008]: W0129 15:35:04.600408 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1babb539_12b9_4532_b9c3_bc165829c40e.slice/crio-e37689380c3622f490c954f7fb007fa6de6d35e480891ae38bd2f40c0a5d14c2 WatchSource:0}: Error finding container e37689380c3622f490c954f7fb007fa6de6d35e480891ae38bd2f40c0a5d14c2: Status 404 returned error can't find the container with id e37689380c3622f490c954f7fb007fa6de6d35e480891ae38bd2f40c0a5d14c2 Jan 29 15:35:04 crc kubenswrapper[5008]: I0129 15:35:04.808858 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5g5wg"] Jan 29 15:35:04 crc kubenswrapper[5008]: W0129 15:35:04.815956 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fbd5270_4a24_47ba_a0cf_0c3382a833c0.slice/crio-f47cdc7022ed4732fc406cdc2a4cd2a094585fb848f3cc3f166c35c3b35b744c WatchSource:0}: Error finding container f47cdc7022ed4732fc406cdc2a4cd2a094585fb848f3cc3f166c35c3b35b744c: Status 404 returned error can't find the container with id f47cdc7022ed4732fc406cdc2a4cd2a094585fb848f3cc3f166c35c3b35b744c Jan 29 15:35:05 crc kubenswrapper[5008]: I0129 15:35:05.330758 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aef1830-577d-405c-bb54-6f9fe217ae86" path="/var/lib/kubelet/pods/6aef1830-577d-405c-bb54-6f9fe217ae86/volumes" Jan 29 15:35:05 crc kubenswrapper[5008]: I0129 15:35:05.331421 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7473d665-3627-4470-a820-ebdbdc113587" path="/var/lib/kubelet/pods/7473d665-3627-4470-a820-ebdbdc113587/volumes" Jan 29 15:35:05 crc kubenswrapper[5008]: I0129 15:35:05.331854 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2d42845-cca1-4b60-bc84-4b2baebf702b" path="/var/lib/kubelet/pods/d2d42845-cca1-4b60-bc84-4b2baebf702b/volumes" Jan 29 15:35:05 crc kubenswrapper[5008]: I0129 15:35:05.332380 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea8deba9-72cb-4274-add1-e80591a9e7cc" path="/var/lib/kubelet/pods/ea8deba9-72cb-4274-add1-e80591a9e7cc/volumes" Jan 29 15:35:05 crc kubenswrapper[5008]: I0129 15:35:05.336765 5008 generic.go:334] "Generic (PLEG): container finished" podID="1babb539-12b9-4532-b9c3-bc165829c40e" containerID="c71dddbb50bddf0961ab298e304c65bedc0bbf44cbca0140b51d704f99e7773a" exitCode=0 Jan 29 15:35:05 crc kubenswrapper[5008]: I0129 15:35:05.336811 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nd64n" event={"ID":"1babb539-12b9-4532-b9c3-bc165829c40e","Type":"ContainerDied","Data":"c71dddbb50bddf0961ab298e304c65bedc0bbf44cbca0140b51d704f99e7773a"} Jan 29 15:35:05 crc kubenswrapper[5008]: I0129 15:35:05.336841 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nd64n" event={"ID":"1babb539-12b9-4532-b9c3-bc165829c40e","Type":"ContainerStarted","Data":"e37689380c3622f490c954f7fb007fa6de6d35e480891ae38bd2f40c0a5d14c2"} Jan 29 15:35:05 crc kubenswrapper[5008]: I0129 15:35:05.341078 5008 generic.go:334] "Generic (PLEG): container finished" 
podID="5fbd5270-4a24-47ba-a0cf-0c3382a833c0" containerID="2ee95d7903e3576a4d9a678fd50e6ad9cbd147b2b509919775ff7bee59a15d44" exitCode=0 Jan 29 15:35:05 crc kubenswrapper[5008]: I0129 15:35:05.341179 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5g5wg" event={"ID":"5fbd5270-4a24-47ba-a0cf-0c3382a833c0","Type":"ContainerDied","Data":"2ee95d7903e3576a4d9a678fd50e6ad9cbd147b2b509919775ff7bee59a15d44"} Jan 29 15:35:05 crc kubenswrapper[5008]: I0129 15:35:05.341209 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5g5wg" event={"ID":"5fbd5270-4a24-47ba-a0cf-0c3382a833c0","Type":"ContainerStarted","Data":"f47cdc7022ed4732fc406cdc2a4cd2a094585fb848f3cc3f166c35c3b35b744c"} Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.269912 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l2shr"] Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.271667 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l2shr" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.277088 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.279009 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l2shr"] Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.417760 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xstw\" (UniqueName: \"kubernetes.io/projected/6263e09b-1d9a-4833-851b-1cb8c8132dfe-kube-api-access-8xstw\") pod \"certified-operators-l2shr\" (UID: \"6263e09b-1d9a-4833-851b-1cb8c8132dfe\") " pod="openshift-marketplace/certified-operators-l2shr" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.418034 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6263e09b-1d9a-4833-851b-1cb8c8132dfe-catalog-content\") pod \"certified-operators-l2shr\" (UID: \"6263e09b-1d9a-4833-851b-1cb8c8132dfe\") " pod="openshift-marketplace/certified-operators-l2shr" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.418095 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6263e09b-1d9a-4833-851b-1cb8c8132dfe-utilities\") pod \"certified-operators-l2shr\" (UID: \"6263e09b-1d9a-4833-851b-1cb8c8132dfe\") " pod="openshift-marketplace/certified-operators-l2shr" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.467091 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5br4h"] Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.468385 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5br4h" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.472818 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.477063 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5br4h"] Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.519559 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6263e09b-1d9a-4833-851b-1cb8c8132dfe-catalog-content\") pod \"certified-operators-l2shr\" (UID: \"6263e09b-1d9a-4833-851b-1cb8c8132dfe\") " pod="openshift-marketplace/certified-operators-l2shr" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.519598 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6263e09b-1d9a-4833-851b-1cb8c8132dfe-utilities\") pod \"certified-operators-l2shr\" (UID: \"6263e09b-1d9a-4833-851b-1cb8c8132dfe\") " pod="openshift-marketplace/certified-operators-l2shr" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.519634 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4517208-d057-4652-a3c2-fb8374a45a04-utilities\") pod \"community-operators-5br4h\" (UID: \"b4517208-d057-4652-a3c2-fb8374a45a04\") " pod="openshift-marketplace/community-operators-5br4h" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.519670 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d5zx\" (UniqueName: \"kubernetes.io/projected/b4517208-d057-4652-a3c2-fb8374a45a04-kube-api-access-9d5zx\") pod \"community-operators-5br4h\" (UID: \"b4517208-d057-4652-a3c2-fb8374a45a04\") " pod="openshift-marketplace/community-operators-5br4h" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.519704 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4517208-d057-4652-a3c2-fb8374a45a04-catalog-content\") pod \"community-operators-5br4h\" (UID: \"b4517208-d057-4652-a3c2-fb8374a45a04\") " pod="openshift-marketplace/community-operators-5br4h" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.519756 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xstw\" (UniqueName: \"kubernetes.io/projected/6263e09b-1d9a-4833-851b-1cb8c8132dfe-kube-api-access-8xstw\") pod \"certified-operators-l2shr\" (UID: \"6263e09b-1d9a-4833-851b-1cb8c8132dfe\") " pod="openshift-marketplace/certified-operators-l2shr" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.520026 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6263e09b-1d9a-4833-851b-1cb8c8132dfe-catalog-content\") pod \"certified-operators-l2shr\" (UID: \"6263e09b-1d9a-4833-851b-1cb8c8132dfe\") " pod="openshift-marketplace/certified-operators-l2shr" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.520298 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6263e09b-1d9a-4833-851b-1cb8c8132dfe-utilities\") pod \"certified-operators-l2shr\" (UID: 
\"6263e09b-1d9a-4833-851b-1cb8c8132dfe\") " pod="openshift-marketplace/certified-operators-l2shr" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.538023 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xstw\" (UniqueName: \"kubernetes.io/projected/6263e09b-1d9a-4833-851b-1cb8c8132dfe-kube-api-access-8xstw\") pod \"certified-operators-l2shr\" (UID: \"6263e09b-1d9a-4833-851b-1cb8c8132dfe\") " pod="openshift-marketplace/certified-operators-l2shr" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.620564 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4517208-d057-4652-a3c2-fb8374a45a04-utilities\") pod \"community-operators-5br4h\" (UID: \"b4517208-d057-4652-a3c2-fb8374a45a04\") " pod="openshift-marketplace/community-operators-5br4h" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.620601 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d5zx\" (UniqueName: \"kubernetes.io/projected/b4517208-d057-4652-a3c2-fb8374a45a04-kube-api-access-9d5zx\") pod \"community-operators-5br4h\" (UID: \"b4517208-d057-4652-a3c2-fb8374a45a04\") " pod="openshift-marketplace/community-operators-5br4h" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.620624 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4517208-d057-4652-a3c2-fb8374a45a04-catalog-content\") pod \"community-operators-5br4h\" (UID: \"b4517208-d057-4652-a3c2-fb8374a45a04\") " pod="openshift-marketplace/community-operators-5br4h" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.621009 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4517208-d057-4652-a3c2-fb8374a45a04-catalog-content\") pod \"community-operators-5br4h\" (UID: \"b4517208-d057-4652-a3c2-fb8374a45a04\") " pod="openshift-marketplace/community-operators-5br4h" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.621120 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4517208-d057-4652-a3c2-fb8374a45a04-utilities\") pod \"community-operators-5br4h\" (UID: \"b4517208-d057-4652-a3c2-fb8374a45a04\") " pod="openshift-marketplace/community-operators-5br4h" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.626206 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l2shr" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.637755 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d5zx\" (UniqueName: \"kubernetes.io/projected/b4517208-d057-4652-a3c2-fb8374a45a04-kube-api-access-9d5zx\") pod \"community-operators-5br4h\" (UID: \"b4517208-d057-4652-a3c2-fb8374a45a04\") " pod="openshift-marketplace/community-operators-5br4h" Jan 29 15:35:06 crc kubenswrapper[5008]: I0129 15:35:06.799860 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5br4h" Jan 29 15:35:07 crc kubenswrapper[5008]: I0129 15:35:07.092721 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l2shr"] Jan 29 15:35:07 crc kubenswrapper[5008]: W0129 15:35:07.168768 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6263e09b_1d9a_4833_851b_1cb8c8132dfe.slice/crio-57480267075c1c6c1f32d424673d3bfff9181437427053dbebbc2ef55150cf47 WatchSource:0}: Error finding container 57480267075c1c6c1f32d424673d3bfff9181437427053dbebbc2ef55150cf47: Status 404 returned error can't find the container with id 57480267075c1c6c1f32d424673d3bfff9181437427053dbebbc2ef55150cf47 Jan 29 15:35:07 crc kubenswrapper[5008]: I0129 15:35:07.224270 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5br4h"] Jan 29 15:35:07 crc kubenswrapper[5008]: W0129 15:35:07.247099 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4517208_d057_4652_a3c2_fb8374a45a04.slice/crio-4f6ce874b68e0fda76e6971e2fb915ac52044b317e68762759151427ef13befc WatchSource:0}: Error finding container 4f6ce874b68e0fda76e6971e2fb915ac52044b317e68762759151427ef13befc: Status 404 returned error can't find the container with id 4f6ce874b68e0fda76e6971e2fb915ac52044b317e68762759151427ef13befc Jan 29 15:35:07 crc kubenswrapper[5008]: I0129 15:35:07.358385 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5g5wg" event={"ID":"5fbd5270-4a24-47ba-a0cf-0c3382a833c0","Type":"ContainerStarted","Data":"f6566b1a71bc154f8976359feb809c07d53908a590fb8fe1ffa6fa71bd415b5d"} Jan 29 15:35:07 crc kubenswrapper[5008]: I0129 15:35:07.360814 5008 generic.go:334] "Generic (PLEG): container finished" podID="6263e09b-1d9a-4833-851b-1cb8c8132dfe" containerID="1ee50f343fac896f32e8426a9fca1830223d71004d0f941dac17e02272ea739e" exitCode=0 Jan 29 15:35:07 crc kubenswrapper[5008]: I0129 15:35:07.361003 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2shr" event={"ID":"6263e09b-1d9a-4833-851b-1cb8c8132dfe","Type":"ContainerDied","Data":"1ee50f343fac896f32e8426a9fca1830223d71004d0f941dac17e02272ea739e"} Jan 29 15:35:07 crc kubenswrapper[5008]: I0129 15:35:07.361122 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2shr" event={"ID":"6263e09b-1d9a-4833-851b-1cb8c8132dfe","Type":"ContainerStarted","Data":"57480267075c1c6c1f32d424673d3bfff9181437427053dbebbc2ef55150cf47"} Jan 29 15:35:07 crc kubenswrapper[5008]: I0129 15:35:07.366139 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5br4h" event={"ID":"b4517208-d057-4652-a3c2-fb8374a45a04","Type":"ContainerStarted","Data":"4f6ce874b68e0fda76e6971e2fb915ac52044b317e68762759151427ef13befc"} Jan 29 15:35:07 crc kubenswrapper[5008]: I0129 15:35:07.368029 5008 generic.go:334] "Generic (PLEG): container finished" podID="1babb539-12b9-4532-b9c3-bc165829c40e" containerID="6fad28f4bf1cf406958ddb55142116cd23f56d5096aa3407e95620ebb3a848e6" exitCode=0 Jan 29 15:35:07 crc kubenswrapper[5008]: I0129 15:35:07.368086 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nd64n" 
event={"ID":"1babb539-12b9-4532-b9c3-bc165829c40e","Type":"ContainerDied","Data":"6fad28f4bf1cf406958ddb55142116cd23f56d5096aa3407e95620ebb3a848e6"} Jan 29 15:35:08 crc kubenswrapper[5008]: I0129 15:35:08.374165 5008 generic.go:334] "Generic (PLEG): container finished" podID="5fbd5270-4a24-47ba-a0cf-0c3382a833c0" containerID="f6566b1a71bc154f8976359feb809c07d53908a590fb8fe1ffa6fa71bd415b5d" exitCode=0 Jan 29 15:35:08 crc kubenswrapper[5008]: I0129 15:35:08.374241 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5g5wg" event={"ID":"5fbd5270-4a24-47ba-a0cf-0c3382a833c0","Type":"ContainerDied","Data":"f6566b1a71bc154f8976359feb809c07d53908a590fb8fe1ffa6fa71bd415b5d"} Jan 29 15:35:08 crc kubenswrapper[5008]: I0129 15:35:08.378204 5008 generic.go:334] "Generic (PLEG): container finished" podID="6263e09b-1d9a-4833-851b-1cb8c8132dfe" containerID="6516b380bd26c21d19906c2018f0fe0dc2208d3e146b72ecc659dc058365fb8a" exitCode=0 Jan 29 15:35:08 crc kubenswrapper[5008]: I0129 15:35:08.378255 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2shr" event={"ID":"6263e09b-1d9a-4833-851b-1cb8c8132dfe","Type":"ContainerDied","Data":"6516b380bd26c21d19906c2018f0fe0dc2208d3e146b72ecc659dc058365fb8a"} Jan 29 15:35:08 crc kubenswrapper[5008]: I0129 15:35:08.381292 5008 generic.go:334] "Generic (PLEG): container finished" podID="b4517208-d057-4652-a3c2-fb8374a45a04" containerID="3e99b0758cd250255cb957c3e3c8a726a0dcf68bdfc21fa22b16d16aec39c8cb" exitCode=0 Jan 29 15:35:08 crc kubenswrapper[5008]: I0129 15:35:08.381340 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5br4h" event={"ID":"b4517208-d057-4652-a3c2-fb8374a45a04","Type":"ContainerDied","Data":"3e99b0758cd250255cb957c3e3c8a726a0dcf68bdfc21fa22b16d16aec39c8cb"} Jan 29 15:35:08 crc kubenswrapper[5008]: I0129 15:35:08.385161 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nd64n" event={"ID":"1babb539-12b9-4532-b9c3-bc165829c40e","Type":"ContainerStarted","Data":"089753bfd363a7f88b34666d0f0064b2c2b42df8b6e141620a4f9204ab79a2d9"} Jan 29 15:35:08 crc kubenswrapper[5008]: I0129 15:35:08.426948 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nd64n" podStartSLOduration=2.711090651 podStartE2EDuration="5.426932765s" podCreationTimestamp="2026-01-29 15:35:03 +0000 UTC" firstStartedPulling="2026-01-29 15:35:05.338422633 +0000 UTC m=+449.011276870" lastFinishedPulling="2026-01-29 15:35:08.054264747 +0000 UTC m=+451.727118984" observedRunningTime="2026-01-29 15:35:08.411239099 +0000 UTC m=+452.084093336" watchObservedRunningTime="2026-01-29 15:35:08.426932765 +0000 UTC m=+452.099787002" Jan 29 15:35:09 crc kubenswrapper[5008]: I0129 15:35:09.392346 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5g5wg" event={"ID":"5fbd5270-4a24-47ba-a0cf-0c3382a833c0","Type":"ContainerStarted","Data":"c648c2606a395897d1776e33ddc545c7be11dafde7981802c30449fc687b5b1f"} Jan 29 15:35:09 crc kubenswrapper[5008]: I0129 15:35:09.395440 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2shr" event={"ID":"6263e09b-1d9a-4833-851b-1cb8c8132dfe","Type":"ContainerStarted","Data":"66815f6259390c53bbff0823dc258c97136d9ac4e32415a6f01a94831722e5ec"} Jan 29 15:35:09 crc kubenswrapper[5008]: I0129 15:35:09.422904 5008 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5g5wg" podStartSLOduration=1.877726032 podStartE2EDuration="5.422887745s" podCreationTimestamp="2026-01-29 15:35:04 +0000 UTC" firstStartedPulling="2026-01-29 15:35:05.342393308 +0000 UTC m=+449.015247545" lastFinishedPulling="2026-01-29 15:35:08.887555021 +0000 UTC m=+452.560409258" observedRunningTime="2026-01-29 15:35:09.4226578 +0000 UTC m=+453.095512057" watchObservedRunningTime="2026-01-29 15:35:09.422887745 +0000 UTC m=+453.095741982" Jan 29 15:35:09 crc kubenswrapper[5008]: I0129 15:35:09.439574 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l2shr" podStartSLOduration=1.903850864 podStartE2EDuration="3.439560645s" podCreationTimestamp="2026-01-29 15:35:06 +0000 UTC" firstStartedPulling="2026-01-29 15:35:07.364526213 +0000 UTC m=+451.037380450" lastFinishedPulling="2026-01-29 15:35:08.900235994 +0000 UTC m=+452.573090231" observedRunningTime="2026-01-29 15:35:09.43850307 +0000 UTC m=+453.111357317" watchObservedRunningTime="2026-01-29 15:35:09.439560645 +0000 UTC m=+453.112414882" Jan 29 15:35:10 crc kubenswrapper[5008]: I0129 15:35:10.403629 5008 generic.go:334] "Generic (PLEG): container finished" podID="b4517208-d057-4652-a3c2-fb8374a45a04" containerID="d950744f523fe600a5ffb714a059bbb60099a47879bf08a43236c06ec7e7485d" exitCode=0 Jan 29 15:35:10 crc kubenswrapper[5008]: I0129 15:35:10.403749 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5br4h" event={"ID":"b4517208-d057-4652-a3c2-fb8374a45a04","Type":"ContainerDied","Data":"d950744f523fe600a5ffb714a059bbb60099a47879bf08a43236c06ec7e7485d"} Jan 29 15:35:11 crc kubenswrapper[5008]: I0129 15:35:11.411477 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5br4h" event={"ID":"b4517208-d057-4652-a3c2-fb8374a45a04","Type":"ContainerStarted","Data":"725bdcdc2388f6f1986bbce31dd59ec18775828a95c676633832ed8592276314"} Jan 29 15:35:11 crc kubenswrapper[5008]: I0129 15:35:11.432409 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5br4h" podStartSLOduration=2.87119727 podStartE2EDuration="5.432395929s" podCreationTimestamp="2026-01-29 15:35:06 +0000 UTC" firstStartedPulling="2026-01-29 15:35:08.382769168 +0000 UTC m=+452.055623405" lastFinishedPulling="2026-01-29 15:35:10.943967827 +0000 UTC m=+454.616822064" observedRunningTime="2026-01-29 15:35:11.430514513 +0000 UTC m=+455.103368750" watchObservedRunningTime="2026-01-29 15:35:11.432395929 +0000 UTC m=+455.105250166" Jan 29 15:35:14 crc kubenswrapper[5008]: I0129 15:35:14.203264 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nd64n" Jan 29 15:35:14 crc kubenswrapper[5008]: I0129 15:35:14.204502 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nd64n" Jan 29 15:35:14 crc kubenswrapper[5008]: I0129 15:35:14.242613 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nd64n" Jan 29 15:35:14 crc kubenswrapper[5008]: I0129 15:35:14.406295 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5g5wg" Jan 29 15:35:14 crc kubenswrapper[5008]: I0129 15:35:14.406346 5008 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5g5wg" Jan 29 15:35:14 crc kubenswrapper[5008]: I0129 15:35:14.460683 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nd64n" Jan 29 15:35:15 crc kubenswrapper[5008]: I0129 15:35:15.440241 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5g5wg" podUID="5fbd5270-4a24-47ba-a0cf-0c3382a833c0" containerName="registry-server" probeResult="failure" output=< Jan 29 15:35:15 crc kubenswrapper[5008]: timeout: failed to connect service ":50051" within 1s Jan 29 15:35:15 crc kubenswrapper[5008]: > Jan 29 15:35:16 crc kubenswrapper[5008]: I0129 15:35:16.627075 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l2shr" Jan 29 15:35:16 crc kubenswrapper[5008]: I0129 15:35:16.627125 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l2shr" Jan 29 15:35:16 crc kubenswrapper[5008]: I0129 15:35:16.678833 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l2shr" Jan 29 15:35:16 crc kubenswrapper[5008]: I0129 15:35:16.800575 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5br4h" Jan 29 15:35:16 crc kubenswrapper[5008]: I0129 15:35:16.800991 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5br4h" Jan 29 15:35:16 crc kubenswrapper[5008]: I0129 15:35:16.837275 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5br4h" Jan 29 15:35:17 crc kubenswrapper[5008]: I0129 15:35:17.487366 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5br4h" Jan 29 15:35:17 crc kubenswrapper[5008]: I0129 15:35:17.488889 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l2shr" Jan 29 15:35:18 crc kubenswrapper[5008]: I0129 15:35:18.240498 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" podUID="30c54800-b443-4da8-9d41-22e8f156a1a1" containerName="registry" containerID="cri-o://30e2e1673271910cbbe5ac685fc8d9b9256d07c42ba932c22e18da6b153ba5d5" gracePeriod=30 Jan 29 15:35:22 crc kubenswrapper[5008]: I0129 15:35:22.112883 5008 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-qm54x container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.32:5000/healthz\": dial tcp 10.217.0.32:5000: connect: connection refused" start-of-body= Jan 29 15:35:22 crc kubenswrapper[5008]: I0129 15:35:22.113523 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" podUID="30c54800-b443-4da8-9d41-22e8f156a1a1" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.32:5000/healthz\": dial tcp 10.217.0.32:5000: connect: connection refused" Jan 29 15:35:23 crc kubenswrapper[5008]: I0129 15:35:23.969991 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.057158 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/30c54800-b443-4da8-9d41-22e8f156a1a1-registry-certificates\") pod \"30c54800-b443-4da8-9d41-22e8f156a1a1\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.057223 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/30c54800-b443-4da8-9d41-22e8f156a1a1-ca-trust-extracted\") pod \"30c54800-b443-4da8-9d41-22e8f156a1a1\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.057261 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-registry-tls\") pod \"30c54800-b443-4da8-9d41-22e8f156a1a1\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.057305 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsm4s\" (UniqueName: \"kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-kube-api-access-tsm4s\") pod \"30c54800-b443-4da8-9d41-22e8f156a1a1\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.057368 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-bound-sa-token\") pod \"30c54800-b443-4da8-9d41-22e8f156a1a1\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.057645 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"30c54800-b443-4da8-9d41-22e8f156a1a1\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.057730 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/30c54800-b443-4da8-9d41-22e8f156a1a1-installation-pull-secrets\") pod \"30c54800-b443-4da8-9d41-22e8f156a1a1\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.057758 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/30c54800-b443-4da8-9d41-22e8f156a1a1-trusted-ca\") pod \"30c54800-b443-4da8-9d41-22e8f156a1a1\" (UID: \"30c54800-b443-4da8-9d41-22e8f156a1a1\") " Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.059223 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30c54800-b443-4da8-9d41-22e8f156a1a1-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "30c54800-b443-4da8-9d41-22e8f156a1a1" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.059579 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30c54800-b443-4da8-9d41-22e8f156a1a1-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "30c54800-b443-4da8-9d41-22e8f156a1a1" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.063990 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-kube-api-access-tsm4s" (OuterVolumeSpecName: "kube-api-access-tsm4s") pod "30c54800-b443-4da8-9d41-22e8f156a1a1" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1"). InnerVolumeSpecName "kube-api-access-tsm4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.064359 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "30c54800-b443-4da8-9d41-22e8f156a1a1" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.064642 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "30c54800-b443-4da8-9d41-22e8f156a1a1" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.065986 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30c54800-b443-4da8-9d41-22e8f156a1a1-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "30c54800-b443-4da8-9d41-22e8f156a1a1" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.075646 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "30c54800-b443-4da8-9d41-22e8f156a1a1" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.090836 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30c54800-b443-4da8-9d41-22e8f156a1a1-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "30c54800-b443-4da8-9d41-22e8f156a1a1" (UID: "30c54800-b443-4da8-9d41-22e8f156a1a1"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.158954 5008 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.159008 5008 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/30c54800-b443-4da8-9d41-22e8f156a1a1-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.159031 5008 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/30c54800-b443-4da8-9d41-22e8f156a1a1-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.159050 5008 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/30c54800-b443-4da8-9d41-22e8f156a1a1-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.159068 5008 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/30c54800-b443-4da8-9d41-22e8f156a1a1-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.159085 5008 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.159102 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsm4s\" (UniqueName: \"kubernetes.io/projected/30c54800-b443-4da8-9d41-22e8f156a1a1-kube-api-access-tsm4s\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.303258 5008 generic.go:334] "Generic (PLEG): container finished" podID="30c54800-b443-4da8-9d41-22e8f156a1a1" containerID="30e2e1673271910cbbe5ac685fc8d9b9256d07c42ba932c22e18da6b153ba5d5" exitCode=0 Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.303309 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" event={"ID":"30c54800-b443-4da8-9d41-22e8f156a1a1","Type":"ContainerDied","Data":"30e2e1673271910cbbe5ac685fc8d9b9256d07c42ba932c22e18da6b153ba5d5"} Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.303348 5008 scope.go:117] "RemoveContainer" containerID="30e2e1673271910cbbe5ac685fc8d9b9256d07c42ba932c22e18da6b153ba5d5" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.440643 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5g5wg" Jan 29 15:35:24 crc kubenswrapper[5008]: I0129 15:35:24.478812 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5g5wg" Jan 29 15:35:25 crc kubenswrapper[5008]: I0129 15:35:25.309992 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" Jan 29 15:35:25 crc kubenswrapper[5008]: I0129 15:35:25.310004 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qm54x" event={"ID":"30c54800-b443-4da8-9d41-22e8f156a1a1","Type":"ContainerDied","Data":"59462ccb837299ee29a72d7df21357033cdf6b013812c469de4c5ef1edbad70d"} Jan 29 15:35:25 crc kubenswrapper[5008]: I0129 15:35:25.340696 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qm54x"] Jan 29 15:35:25 crc kubenswrapper[5008]: I0129 15:35:25.348293 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qm54x"] Jan 29 15:35:27 crc kubenswrapper[5008]: I0129 15:35:27.330429 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30c54800-b443-4da8-9d41-22e8f156a1a1" path="/var/lib/kubelet/pods/30c54800-b443-4da8-9d41-22e8f156a1a1/volumes" Jan 29 15:37:13 crc kubenswrapper[5008]: I0129 15:37:13.990427 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:37:13 crc kubenswrapper[5008]: I0129 15:37:13.991033 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:37:43 crc kubenswrapper[5008]: I0129 15:37:43.990960 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:37:43 crc kubenswrapper[5008]: I0129 15:37:43.991532 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:38:13 crc kubenswrapper[5008]: I0129 15:38:13.991650 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:38:13 crc kubenswrapper[5008]: I0129 15:38:13.992519 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:38:13 crc kubenswrapper[5008]: I0129 15:38:13.992592 5008 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:38:13 crc kubenswrapper[5008]: I0129 15:38:13.993841 5008 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9850a434d4d07df0fe32aef86e993277e84b797db07cefc7dc516322c6794dab"} pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:38:13 crc kubenswrapper[5008]: I0129 15:38:13.993994 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" containerID="cri-o://9850a434d4d07df0fe32aef86e993277e84b797db07cefc7dc516322c6794dab" gracePeriod=600 Jan 29 15:38:14 crc kubenswrapper[5008]: I0129 15:38:14.338071 5008 generic.go:334] "Generic (PLEG): container finished" podID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerID="9850a434d4d07df0fe32aef86e993277e84b797db07cefc7dc516322c6794dab" exitCode=0 Jan 29 15:38:14 crc kubenswrapper[5008]: I0129 15:38:14.338160 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerDied","Data":"9850a434d4d07df0fe32aef86e993277e84b797db07cefc7dc516322c6794dab"} Jan 29 15:38:14 crc kubenswrapper[5008]: I0129 15:38:14.338663 5008 scope.go:117] "RemoveContainer" containerID="1094d3e48c81c3e2ea9f57f39bbd7ccc01c1ccc72a4337e691b80548a8d40521" Jan 29 15:38:15 crc kubenswrapper[5008]: I0129 15:38:15.348916 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerStarted","Data":"d89267ade5f0f1bc5747291958183960695e4e4e932d44027e6c4704ebb5c4ef"} Jan 29 15:38:41 crc kubenswrapper[5008]: I0129 15:38:41.226725 5008 scope.go:117] "RemoveContainer" containerID="40321afd189e235fc1bb78923d74cb98e8fe85b88b55f9bd3844976bd07eb0f5" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.232713 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-dvjtx"] Jan 29 15:40:03 crc kubenswrapper[5008]: E0129 15:40:03.234100 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30c54800-b443-4da8-9d41-22e8f156a1a1" containerName="registry" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.234163 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="30c54800-b443-4da8-9d41-22e8f156a1a1" containerName="registry" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.234308 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="30c54800-b443-4da8-9d41-22e8f156a1a1" containerName="registry" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.234718 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dvjtx" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.241341 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.241541 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.241662 5008 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-kdwxj" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.248216 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-dvjtx"] Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.252767 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-fbjsd"] Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.253478 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-fbjsd" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.258249 5008 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-cw4ht" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.260055 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-wvlhn"] Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.260777 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-wvlhn" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.264207 5008 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-mhxb5" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.279434 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-wvlhn"] Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.285860 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-fbjsd"] Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.330357 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjcv6\" (UniqueName: \"kubernetes.io/projected/1217edcf-8ec1-4354-8fbe-a9325b564932-kube-api-access-kjcv6\") pod \"cert-manager-cainjector-cf98fcc89-dvjtx\" (UID: \"1217edcf-8ec1-4354-8fbe-a9325b564932\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-dvjtx" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.431413 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgfpc\" (UniqueName: \"kubernetes.io/projected/346fd378-8582-44af-8332-dad183bddf6e-kube-api-access-dgfpc\") pod \"cert-manager-858654f9db-fbjsd\" (UID: \"346fd378-8582-44af-8332-dad183bddf6e\") " pod="cert-manager/cert-manager-858654f9db-fbjsd" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.431461 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjcv6\" (UniqueName: \"kubernetes.io/projected/1217edcf-8ec1-4354-8fbe-a9325b564932-kube-api-access-kjcv6\") pod \"cert-manager-cainjector-cf98fcc89-dvjtx\" (UID: \"1217edcf-8ec1-4354-8fbe-a9325b564932\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-dvjtx" Jan 29 15:40:03 crc 
kubenswrapper[5008]: I0129 15:40:03.431575 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnxks\" (UniqueName: \"kubernetes.io/projected/6111be19-5e01-42e4-b4cf-3728e3ee4a6f-kube-api-access-tnxks\") pod \"cert-manager-webhook-687f57d79b-wvlhn\" (UID: \"6111be19-5e01-42e4-b4cf-3728e3ee4a6f\") " pod="cert-manager/cert-manager-webhook-687f57d79b-wvlhn" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.454659 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjcv6\" (UniqueName: \"kubernetes.io/projected/1217edcf-8ec1-4354-8fbe-a9325b564932-kube-api-access-kjcv6\") pod \"cert-manager-cainjector-cf98fcc89-dvjtx\" (UID: \"1217edcf-8ec1-4354-8fbe-a9325b564932\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-dvjtx" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.533070 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgfpc\" (UniqueName: \"kubernetes.io/projected/346fd378-8582-44af-8332-dad183bddf6e-kube-api-access-dgfpc\") pod \"cert-manager-858654f9db-fbjsd\" (UID: \"346fd378-8582-44af-8332-dad183bddf6e\") " pod="cert-manager/cert-manager-858654f9db-fbjsd" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.533176 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnxks\" (UniqueName: \"kubernetes.io/projected/6111be19-5e01-42e4-b4cf-3728e3ee4a6f-kube-api-access-tnxks\") pod \"cert-manager-webhook-687f57d79b-wvlhn\" (UID: \"6111be19-5e01-42e4-b4cf-3728e3ee4a6f\") " pod="cert-manager/cert-manager-webhook-687f57d79b-wvlhn" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.552758 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dvjtx" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.553498 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgfpc\" (UniqueName: \"kubernetes.io/projected/346fd378-8582-44af-8332-dad183bddf6e-kube-api-access-dgfpc\") pod \"cert-manager-858654f9db-fbjsd\" (UID: \"346fd378-8582-44af-8332-dad183bddf6e\") " pod="cert-manager/cert-manager-858654f9db-fbjsd" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.554537 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnxks\" (UniqueName: \"kubernetes.io/projected/6111be19-5e01-42e4-b4cf-3728e3ee4a6f-kube-api-access-tnxks\") pod \"cert-manager-webhook-687f57d79b-wvlhn\" (UID: \"6111be19-5e01-42e4-b4cf-3728e3ee4a6f\") " pod="cert-manager/cert-manager-webhook-687f57d79b-wvlhn" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.572532 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-fbjsd" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.578568 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-wvlhn" Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.781563 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-dvjtx"] Jan 29 15:40:03 crc kubenswrapper[5008]: W0129 15:40:03.790158 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1217edcf_8ec1_4354_8fbe_a9325b564932.slice/crio-216995277e5d30ed098dd19e52df235162e50b78436973c63d29c1f7f45df80d WatchSource:0}: Error finding container 216995277e5d30ed098dd19e52df235162e50b78436973c63d29c1f7f45df80d: Status 404 returned error can't find the container with id 216995277e5d30ed098dd19e52df235162e50b78436973c63d29c1f7f45df80d Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.792839 5008 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.824816 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-fbjsd"] Jan 29 15:40:03 crc kubenswrapper[5008]: W0129 15:40:03.831119 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod346fd378_8582_44af_8332_dad183bddf6e.slice/crio-5c0ad503af1b2db8df1b5e71d1b0785a05ae8120e4e93b5a2efec461db0432f0 WatchSource:0}: Error finding container 5c0ad503af1b2db8df1b5e71d1b0785a05ae8120e4e93b5a2efec461db0432f0: Status 404 returned error can't find the container with id 5c0ad503af1b2db8df1b5e71d1b0785a05ae8120e4e93b5a2efec461db0432f0 Jan 29 15:40:03 crc kubenswrapper[5008]: I0129 15:40:03.860103 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-wvlhn"] Jan 29 15:40:03 crc kubenswrapper[5008]: W0129 15:40:03.863577 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6111be19_5e01_42e4_b4cf_3728e3ee4a6f.slice/crio-4abd6fd6a3fecf5bc465dfd724e04ab141b2c033e5c8b08ab8a920b1a01351a7 WatchSource:0}: Error finding container 4abd6fd6a3fecf5bc465dfd724e04ab141b2c033e5c8b08ab8a920b1a01351a7: Status 404 returned error can't find the container with id 4abd6fd6a3fecf5bc465dfd724e04ab141b2c033e5c8b08ab8a920b1a01351a7 Jan 29 15:40:04 crc kubenswrapper[5008]: I0129 15:40:04.017442 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-wvlhn" event={"ID":"6111be19-5e01-42e4-b4cf-3728e3ee4a6f","Type":"ContainerStarted","Data":"4abd6fd6a3fecf5bc465dfd724e04ab141b2c033e5c8b08ab8a920b1a01351a7"} Jan 29 15:40:04 crc kubenswrapper[5008]: I0129 15:40:04.018901 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dvjtx" event={"ID":"1217edcf-8ec1-4354-8fbe-a9325b564932","Type":"ContainerStarted","Data":"216995277e5d30ed098dd19e52df235162e50b78436973c63d29c1f7f45df80d"} Jan 29 15:40:04 crc kubenswrapper[5008]: I0129 15:40:04.020163 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-fbjsd" event={"ID":"346fd378-8582-44af-8332-dad183bddf6e","Type":"ContainerStarted","Data":"5c0ad503af1b2db8df1b5e71d1b0785a05ae8120e4e93b5a2efec461db0432f0"} Jan 29 15:40:09 crc kubenswrapper[5008]: I0129 15:40:09.051587 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-fbjsd" 
event={"ID":"346fd378-8582-44af-8332-dad183bddf6e","Type":"ContainerStarted","Data":"34bfddb8b2aa4c65caa162750a2c933a9e28ae7f64daf2f02258b413a9bf62fd"} Jan 29 15:40:09 crc kubenswrapper[5008]: I0129 15:40:09.071160 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-fbjsd" podStartSLOduration=1.2556275669999999 podStartE2EDuration="6.071138678s" podCreationTimestamp="2026-01-29 15:40:03 +0000 UTC" firstStartedPulling="2026-01-29 15:40:03.836996114 +0000 UTC m=+747.509850351" lastFinishedPulling="2026-01-29 15:40:08.652507235 +0000 UTC m=+752.325361462" observedRunningTime="2026-01-29 15:40:09.066360792 +0000 UTC m=+752.739215049" watchObservedRunningTime="2026-01-29 15:40:09.071138678 +0000 UTC m=+752.743992925" Jan 29 15:40:10 crc kubenswrapper[5008]: I0129 15:40:10.063300 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-wvlhn" event={"ID":"6111be19-5e01-42e4-b4cf-3728e3ee4a6f","Type":"ContainerStarted","Data":"fb7241b0aeb8cb74e9b6bbc1ffbe469525775788ea0f59db5ce9eeb1fa467092"} Jan 29 15:40:10 crc kubenswrapper[5008]: I0129 15:40:10.063686 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-wvlhn" Jan 29 15:40:10 crc kubenswrapper[5008]: I0129 15:40:10.065279 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dvjtx" event={"ID":"1217edcf-8ec1-4354-8fbe-a9325b564932","Type":"ContainerStarted","Data":"9e452f590aa4034f58c037627564780fcf4c1501ec00ba88da98d01c3b1a302c"} Jan 29 15:40:10 crc kubenswrapper[5008]: I0129 15:40:10.085589 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-wvlhn" podStartSLOduration=1.460760493 podStartE2EDuration="7.085570582s" podCreationTimestamp="2026-01-29 15:40:03 +0000 UTC" firstStartedPulling="2026-01-29 15:40:03.865566256 +0000 UTC m=+747.538420493" lastFinishedPulling="2026-01-29 15:40:09.490376345 +0000 UTC m=+753.163230582" observedRunningTime="2026-01-29 15:40:10.080653204 +0000 UTC m=+753.753507441" watchObservedRunningTime="2026-01-29 15:40:10.085570582 +0000 UTC m=+753.758424819" Jan 29 15:40:10 crc kubenswrapper[5008]: I0129 15:40:10.099625 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dvjtx" podStartSLOduration=1.492104221 podStartE2EDuration="7.099606052s" podCreationTimestamp="2026-01-29 15:40:03 +0000 UTC" firstStartedPulling="2026-01-29 15:40:03.792510557 +0000 UTC m=+747.465364794" lastFinishedPulling="2026-01-29 15:40:09.400012388 +0000 UTC m=+753.072866625" observedRunningTime="2026-01-29 15:40:10.095627275 +0000 UTC m=+753.768481512" watchObservedRunningTime="2026-01-29 15:40:10.099606052 +0000 UTC m=+753.772460309" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.021751 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pqg9w"] Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.022385 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovn-controller" containerID="cri-o://676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8" gracePeriod=30 Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.022913 5008 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="sbdb" containerID="cri-o://dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195" gracePeriod=30 Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.022991 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="nbdb" containerID="cri-o://eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1" gracePeriod=30 Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.023087 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="northd" containerID="cri-o://b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420" gracePeriod=30 Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.023156 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovn-acl-logging" containerID="cri-o://3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554" gracePeriod=30 Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.023226 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5" gracePeriod=30 Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.023210 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="kube-rbac-proxy-node" containerID="cri-o://08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1" gracePeriod=30 Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.088970 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovnkube-controller" containerID="cri-o://f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c" gracePeriod=30 Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.819043 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovnkube-controller/3.log" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.823064 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovn-acl-logging/0.log" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.823976 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovn-controller/0.log" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.824684 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892193 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-j9h2f"] Jan 29 15:40:13 crc kubenswrapper[5008]: E0129 15:40:13.892432 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="kube-rbac-proxy-node" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892450 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="kube-rbac-proxy-node" Jan 29 15:40:13 crc kubenswrapper[5008]: E0129 15:40:13.892462 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892471 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 15:40:13 crc kubenswrapper[5008]: E0129 15:40:13.892486 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="northd" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892495 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="northd" Jan 29 15:40:13 crc kubenswrapper[5008]: E0129 15:40:13.892507 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="sbdb" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892515 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="sbdb" Jan 29 15:40:13 crc kubenswrapper[5008]: E0129 15:40:13.892530 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovn-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892538 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovn-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: E0129 15:40:13.892547 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovnkube-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892555 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovnkube-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: E0129 15:40:13.892566 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovnkube-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892574 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovnkube-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: E0129 15:40:13.892584 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovnkube-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892591 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovnkube-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: E0129 15:40:13.892605 5008 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="nbdb" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892612 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="nbdb" Jan 29 15:40:13 crc kubenswrapper[5008]: E0129 15:40:13.892627 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovnkube-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892636 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovnkube-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: E0129 15:40:13.892648 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="kubecfg-setup" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892657 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="kubecfg-setup" Jan 29 15:40:13 crc kubenswrapper[5008]: E0129 15:40:13.892670 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovn-acl-logging" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892679 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovn-acl-logging" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892837 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="northd" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892853 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovnkube-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892865 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovn-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892875 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="nbdb" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892887 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="sbdb" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892898 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892907 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovnkube-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892917 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovnkube-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892927 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovnkube-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892936 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovnkube-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892949 5008 
memory_manager.go:354] "RemoveStaleState removing state" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovn-acl-logging" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.892961 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="kube-rbac-proxy-node" Jan 29 15:40:13 crc kubenswrapper[5008]: E0129 15:40:13.893081 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovnkube-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.893090 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d092513-7735-4c98-9734-57bc46b99280" containerName="ovnkube-controller" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.895466 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995223 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-var-lib-cni-networks-ovn-kubernetes\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995287 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-cni-bin\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995324 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-run-netns\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995361 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-systemd\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995390 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-systemd-units\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995419 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-slash\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995459 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2xcc\" (UniqueName: \"kubernetes.io/projected/1d092513-7735-4c98-9734-57bc46b99280-kube-api-access-d2xcc\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995498 5008 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-run-ovn-kubernetes\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995544 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-cni-netd\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995597 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1d092513-7735-4c98-9734-57bc46b99280-ovn-node-metrics-cert\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995668 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-ovnkube-config\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995696 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-kubelet\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995726 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-ovn\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995759 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-env-overrides\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995808 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-log-socket\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995846 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-etc-openvswitch\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995873 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-var-lib-openvswitch\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995910 5008 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-openvswitch\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995940 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-node-log\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.995979 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-ovnkube-script-lib\") pod \"1d092513-7735-4c98-9734-57bc46b99280\" (UID: \"1d092513-7735-4c98-9734-57bc46b99280\") " Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.997093 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.997136 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.997165 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.997192 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.999663 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:40:13 crc kubenswrapper[5008]: I0129 15:40:13.999910 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.001019 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-slash" (OuterVolumeSpecName: "host-slash") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.002050 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.002118 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.002117 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.002202 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-node-log" (OuterVolumeSpecName: "node-log") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.002205 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.002327 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.002347 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-log-socket" (OuterVolumeSpecName: "log-socket") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.002483 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.002965 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.003061 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.006086 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d092513-7735-4c98-9734-57bc46b99280-kube-api-access-d2xcc" (OuterVolumeSpecName: "kube-api-access-d2xcc") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "kube-api-access-d2xcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.008370 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d092513-7735-4c98-9734-57bc46b99280-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.028436 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "1d092513-7735-4c98-9734-57bc46b99280" (UID: "1d092513-7735-4c98-9734-57bc46b99280"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.091430 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-42hcz_cdd8ae23-3f9f-49f8-928d-46dad823fde4/kube-multus/2.log" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.092006 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-42hcz_cdd8ae23-3f9f-49f8-928d-46dad823fde4/kube-multus/1.log" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.092050 5008 generic.go:334] "Generic (PLEG): container finished" podID="cdd8ae23-3f9f-49f8-928d-46dad823fde4" containerID="a79b05ecc77ae822ab75bfdce779bbfbb375857cfbf47a090a83a690373dc6e0" exitCode=2 Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.092111 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-42hcz" event={"ID":"cdd8ae23-3f9f-49f8-928d-46dad823fde4","Type":"ContainerDied","Data":"a79b05ecc77ae822ab75bfdce779bbfbb375857cfbf47a090a83a690373dc6e0"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.092153 5008 scope.go:117] "RemoveContainer" containerID="af9a973786f58d2c63123c28e0b1aedaa9ec4188567960c544cf68f70ba20873" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.092720 5008 scope.go:117] "RemoveContainer" containerID="a79b05ecc77ae822ab75bfdce779bbfbb375857cfbf47a090a83a690373dc6e0" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.097267 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-node-log\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.097477 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-run-ovn-kubernetes\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.097594 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-cni-netd\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.097659 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovnkube-controller/3.log" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.098148 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-var-lib-openvswitch\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.098198 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.098246 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-etc-openvswitch\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.098282 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-run-systemd\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.098466 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-slash\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.098679 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-run-netns\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.098727 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-cni-bin\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.098759 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-kubelet\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.098801 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-log-socket\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.098829 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-run-ovn\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.098891 5008 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-run-openvswitch\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.099013 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-systemd-units\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.099080 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s8mt\" (UniqueName: \"kubernetes.io/projected/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-kube-api-access-6s8mt\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.099147 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-ovnkube-config\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.099224 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-env-overrides\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.099302 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-ovnkube-script-lib\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100142 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-ovn-node-metrics-cert\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100558 5008 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100596 5008 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100617 5008 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-openvswitch\") on node 
\"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100634 5008 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-node-log\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100651 5008 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100673 5008 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100692 5008 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100712 5008 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100729 5008 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100746 5008 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100762 5008 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-slash\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100803 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2xcc\" (UniqueName: \"kubernetes.io/projected/1d092513-7735-4c98-9734-57bc46b99280-kube-api-access-d2xcc\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100821 5008 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100838 5008 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100858 5008 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1d092513-7735-4c98-9734-57bc46b99280-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100876 5008 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-ovnkube-config\") on node 
\"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100895 5008 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100913 5008 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100951 5008 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1d092513-7735-4c98-9734-57bc46b99280-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.100969 5008 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1d092513-7735-4c98-9734-57bc46b99280-log-socket\") on node \"crc\" DevicePath \"\"" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.102050 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovn-acl-logging/0.log" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.102699 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqg9w_1d092513-7735-4c98-9734-57bc46b99280/ovn-controller/0.log" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103382 5008 generic.go:334] "Generic (PLEG): container finished" podID="1d092513-7735-4c98-9734-57bc46b99280" containerID="f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c" exitCode=0 Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103443 5008 generic.go:334] "Generic (PLEG): container finished" podID="1d092513-7735-4c98-9734-57bc46b99280" containerID="dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195" exitCode=0 Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103463 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerDied","Data":"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103498 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103518 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerDied","Data":"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103537 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerDied","Data":"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103471 5008 generic.go:334] "Generic (PLEG): container finished" podID="1d092513-7735-4c98-9734-57bc46b99280" containerID="eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1" exitCode=0 Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103570 5008 generic.go:334] "Generic (PLEG): container finished" podID="1d092513-7735-4c98-9734-57bc46b99280" containerID="b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420" exitCode=0 Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103587 5008 generic.go:334] "Generic (PLEG): container finished" podID="1d092513-7735-4c98-9734-57bc46b99280" containerID="84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5" exitCode=0 Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103600 5008 generic.go:334] "Generic (PLEG): container finished" podID="1d092513-7735-4c98-9734-57bc46b99280" containerID="08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1" exitCode=0 Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103612 5008 generic.go:334] "Generic (PLEG): container finished" podID="1d092513-7735-4c98-9734-57bc46b99280" containerID="3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554" exitCode=143 Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103624 5008 generic.go:334] "Generic (PLEG): container finished" podID="1d092513-7735-4c98-9734-57bc46b99280" containerID="676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8" exitCode=143 Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103642 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerDied","Data":"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103658 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerDied","Data":"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103673 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerDied","Data":"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103691 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c"} Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103707 5008 pod_container_deletor.go:114] "Failed 
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103716 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103725 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103732 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103739 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103747 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103754 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103762 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103770 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103797 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerDied","Data":"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103810 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103821 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103828 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103835 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103842 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103849 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103856 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103864 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103871 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103877 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103889 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerDied","Data":"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103900 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103911 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103918 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103925 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103933 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103940 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103947 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103954 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103961 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103969 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103979 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqg9w" event={"ID":"1d092513-7735-4c98-9734-57bc46b99280","Type":"ContainerDied","Data":"3ed021c49019edf6db353db02ef3c36191fef92186df2ed16a92920dd439b3d2"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103991 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.103999 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.104006 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.104013 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.104019 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.104026 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.104034 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.104041 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.104048 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.104054 5008 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6"}
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.193546 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pqg9w"]
source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pqg9w"] Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.196380 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pqg9w"] Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209050 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-env-overrides\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209084 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-ovnkube-script-lib\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209105 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-ovn-node-metrics-cert\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209133 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-node-log\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209148 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-run-ovn-kubernetes\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209162 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-cni-netd\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209184 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-var-lib-openvswitch\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209198 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209236 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-etc-openvswitch\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209257 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-run-systemd\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209272 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-slash\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209287 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-run-netns\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209308 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-cni-bin\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209366 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-kubelet\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209381 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-log-socket\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209397 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-run-ovn\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209416 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-run-openvswitch\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209421 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-var-lib-openvswitch\") pod \"ovnkube-node-j9h2f\" (UID: 
\"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209434 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-systemd-units\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209484 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209504 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s8mt\" (UniqueName: \"kubernetes.io/projected/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-kube-api-access-6s8mt\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209533 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-ovnkube-config\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.210353 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-ovnkube-config\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.210396 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-kubelet\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.210462 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-cni-bin\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.210471 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-log-socket\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.210520 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-run-ovn\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 
crc kubenswrapper[5008]: I0129 15:40:14.210567 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-run-openvswitch\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.210574 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-etc-openvswitch\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.210600 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-run-systemd\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.210621 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-slash\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.210642 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-run-netns\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.210666 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-node-log\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.210737 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-env-overrides\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.210743 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-cni-netd\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.209467 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-systemd-units\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.210852 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-host-run-ovn-kubernetes\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.211467 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-ovnkube-script-lib\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.217325 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-ovn-node-metrics-cert\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.240936 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s8mt\" (UniqueName: \"kubernetes.io/projected/252dea6f-dc2c-4c83-8930-535e5b0f6cdb-kube-api-access-6s8mt\") pod \"ovnkube-node-j9h2f\" (UID: \"252dea6f-dc2c-4c83-8930-535e5b0f6cdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.520117 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:14 crc kubenswrapper[5008]: W0129 15:40:14.550007 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod252dea6f_dc2c_4c83_8930_535e5b0f6cdb.slice/crio-ae1ef17fa87e70552ab49e9b4a89f9dfbeaebd92cd6bd29ade10978c8c8d56a4 WatchSource:0}: Error finding container ae1ef17fa87e70552ab49e9b4a89f9dfbeaebd92cd6bd29ade10978c8c8d56a4: Status 404 returned error can't find the container with id ae1ef17fa87e70552ab49e9b4a89f9dfbeaebd92cd6bd29ade10978c8c8d56a4 Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.637296 5008 scope.go:117] "RemoveContainer" containerID="f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.664270 5008 scope.go:117] "RemoveContainer" containerID="c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.690266 5008 scope.go:117] "RemoveContainer" containerID="dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.713496 5008 scope.go:117] "RemoveContainer" containerID="eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.736173 5008 scope.go:117] "RemoveContainer" containerID="b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.764950 5008 scope.go:117] "RemoveContainer" containerID="84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.783322 5008 scope.go:117] "RemoveContainer" containerID="08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.803405 5008 scope.go:117] "RemoveContainer" containerID="3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554" Jan 29 15:40:14 
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.843933 5008 scope.go:117] "RemoveContainer" containerID="6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6"
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.861837 5008 scope.go:117] "RemoveContainer" containerID="f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c"
Jan 29 15:40:14 crc kubenswrapper[5008]: E0129 15:40:14.862210 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c\": container with ID starting with f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c not found: ID does not exist" containerID="f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c"
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.862248 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c"} err="failed to get container status \"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c\": rpc error: code = NotFound desc = could not find container \"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c\": container with ID starting with f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c not found: ID does not exist"
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.862278 5008 scope.go:117] "RemoveContainer" containerID="c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9"
Jan 29 15:40:14 crc kubenswrapper[5008]: E0129 15:40:14.863495 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9\": container with ID starting with c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9 not found: ID does not exist" containerID="c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9"
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.863531 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9"} err="failed to get container status \"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9\": rpc error: code = NotFound desc = could not find container \"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9\": container with ID starting with c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9 not found: ID does not exist"
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.863544 5008 scope.go:117] "RemoveContainer" containerID="dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195"
Jan 29 15:40:14 crc kubenswrapper[5008]: E0129 15:40:14.863932 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\": container with ID starting with dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195 not found: ID does not exist" containerID="dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195"
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.863981 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195"} err="failed to get container status \"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\": rpc error: code = NotFound desc = could not find container \"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\": container with ID starting with dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195 not found: ID does not exist"
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.864010 5008 scope.go:117] "RemoveContainer" containerID="eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1"
Jan 29 15:40:14 crc kubenswrapper[5008]: E0129 15:40:14.864253 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\": container with ID starting with eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1 not found: ID does not exist" containerID="eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1"
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.864283 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1"} err="failed to get container status \"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\": rpc error: code = NotFound desc = could not find container \"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\": container with ID starting with eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1 not found: ID does not exist"
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.864308 5008 scope.go:117] "RemoveContainer" containerID="b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420"
Jan 29 15:40:14 crc kubenswrapper[5008]: E0129 15:40:14.864518 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\": container with ID starting with b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420 not found: ID does not exist" containerID="b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420"
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.864568 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420"} err="failed to get container status \"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\": rpc error: code = NotFound desc = could not find container \"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\": container with ID starting with b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420 not found: ID does not exist"
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.864583 5008 scope.go:117] "RemoveContainer" containerID="84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5"
Jan 29 15:40:14 crc kubenswrapper[5008]: E0129 15:40:14.864870 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\": container with ID starting with 84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5 not found: ID does not exist" containerID="84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5"
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.864919 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5"} err="failed to get container status \"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\": rpc error: code = NotFound desc = could not find container \"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\": container with ID starting with 84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5 not found: ID does not exist"
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.864935 5008 scope.go:117] "RemoveContainer" containerID="08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1"
Jan 29 15:40:14 crc kubenswrapper[5008]: E0129 15:40:14.865149 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\": container with ID starting with 08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1 not found: ID does not exist" containerID="08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1"
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.865176 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1"} err="failed to get container status \"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\": rpc error: code = NotFound desc = could not find container \"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\": container with ID starting with 08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1 not found: ID does not exist"
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.865189 5008 scope.go:117] "RemoveContainer" containerID="3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554"
Jan 29 15:40:14 crc kubenswrapper[5008]: E0129 15:40:14.865671 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\": container with ID starting with 3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554 not found: ID does not exist" containerID="3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554"
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.865746 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554"} err="failed to get container status \"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\": rpc error: code = NotFound desc = could not find container \"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\": container with ID starting with 3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554 not found: ID does not exist"
Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.865760 5008 scope.go:117] "RemoveContainer" containerID="676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8"
Jan 29 15:40:14 crc kubenswrapper[5008]: E0129 15:40:14.866126 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\": container with ID starting with 676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8 not found: ID does not exist" containerID="676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8"
\"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\": container with ID starting with 676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8 not found: ID does not exist" containerID="676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.866202 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8"} err="failed to get container status \"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\": rpc error: code = NotFound desc = could not find container \"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\": container with ID starting with 676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.866261 5008 scope.go:117] "RemoveContainer" containerID="6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6" Jan 29 15:40:14 crc kubenswrapper[5008]: E0129 15:40:14.866642 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\": container with ID starting with 6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6 not found: ID does not exist" containerID="6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.866700 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6"} err="failed to get container status \"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\": rpc error: code = NotFound desc = could not find container \"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\": container with ID starting with 6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.866715 5008 scope.go:117] "RemoveContainer" containerID="f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.866947 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c"} err="failed to get container status \"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c\": rpc error: code = NotFound desc = could not find container \"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c\": container with ID starting with f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.867003 5008 scope.go:117] "RemoveContainer" containerID="c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.867257 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9"} err="failed to get container status \"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9\": rpc error: code = NotFound desc = could not find container \"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9\": container with ID starting with 
c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.867277 5008 scope.go:117] "RemoveContainer" containerID="dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.867532 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195"} err="failed to get container status \"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\": rpc error: code = NotFound desc = could not find container \"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\": container with ID starting with dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.867552 5008 scope.go:117] "RemoveContainer" containerID="eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.867693 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1"} err="failed to get container status \"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\": rpc error: code = NotFound desc = could not find container \"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\": container with ID starting with eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.867707 5008 scope.go:117] "RemoveContainer" containerID="b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.867976 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420"} err="failed to get container status \"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\": rpc error: code = NotFound desc = could not find container \"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\": container with ID starting with b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.867998 5008 scope.go:117] "RemoveContainer" containerID="84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.868274 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5"} err="failed to get container status \"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\": rpc error: code = NotFound desc = could not find container \"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\": container with ID starting with 84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.868301 5008 scope.go:117] "RemoveContainer" containerID="08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.868476 5008 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1"} err="failed to get container status \"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\": rpc error: code = NotFound desc = could not find container \"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\": container with ID starting with 08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.868500 5008 scope.go:117] "RemoveContainer" containerID="3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.868819 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554"} err="failed to get container status \"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\": rpc error: code = NotFound desc = could not find container \"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\": container with ID starting with 3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.868850 5008 scope.go:117] "RemoveContainer" containerID="676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.869937 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8"} err="failed to get container status \"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\": rpc error: code = NotFound desc = could not find container \"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\": container with ID starting with 676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.869959 5008 scope.go:117] "RemoveContainer" containerID="6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.870140 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6"} err="failed to get container status \"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\": rpc error: code = NotFound desc = could not find container \"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\": container with ID starting with 6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.870155 5008 scope.go:117] "RemoveContainer" containerID="f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.870324 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c"} err="failed to get container status \"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c\": rpc error: code = NotFound desc = could not find container \"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c\": container with ID starting with f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c not found: ID does not exist" Jan 
29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.870336 5008 scope.go:117] "RemoveContainer" containerID="c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.870489 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9"} err="failed to get container status \"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9\": rpc error: code = NotFound desc = could not find container \"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9\": container with ID starting with c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.870502 5008 scope.go:117] "RemoveContainer" containerID="dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.870635 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195"} err="failed to get container status \"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\": rpc error: code = NotFound desc = could not find container \"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\": container with ID starting with dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.870646 5008 scope.go:117] "RemoveContainer" containerID="eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.870768 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1"} err="failed to get container status \"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\": rpc error: code = NotFound desc = could not find container \"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\": container with ID starting with eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.870822 5008 scope.go:117] "RemoveContainer" containerID="b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.870974 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420"} err="failed to get container status \"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\": rpc error: code = NotFound desc = could not find container \"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\": container with ID starting with b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.870987 5008 scope.go:117] "RemoveContainer" containerID="84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.871183 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5"} err="failed to get container status 
\"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\": rpc error: code = NotFound desc = could not find container \"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\": container with ID starting with 84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.871195 5008 scope.go:117] "RemoveContainer" containerID="08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.871413 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1"} err="failed to get container status \"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\": rpc error: code = NotFound desc = could not find container \"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\": container with ID starting with 08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.871440 5008 scope.go:117] "RemoveContainer" containerID="3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.871635 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554"} err="failed to get container status \"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\": rpc error: code = NotFound desc = could not find container \"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\": container with ID starting with 3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.871649 5008 scope.go:117] "RemoveContainer" containerID="676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.871858 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8"} err="failed to get container status \"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\": rpc error: code = NotFound desc = could not find container \"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\": container with ID starting with 676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.871883 5008 scope.go:117] "RemoveContainer" containerID="6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.872017 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6"} err="failed to get container status \"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\": rpc error: code = NotFound desc = could not find container \"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\": container with ID starting with 6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.872029 5008 scope.go:117] "RemoveContainer" 
containerID="f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.872157 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c"} err="failed to get container status \"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c\": rpc error: code = NotFound desc = could not find container \"f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c\": container with ID starting with f8f1d8793cbf27bc352ee2009caccdffa0a765f416beee3df3c97018285f6f5c not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.872177 5008 scope.go:117] "RemoveContainer" containerID="c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.872447 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9"} err="failed to get container status \"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9\": rpc error: code = NotFound desc = could not find container \"c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9\": container with ID starting with c4894794fa383987c6dc74bda3cd40e56fa81dab982e631fe2fb043b74a6afd9 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.872496 5008 scope.go:117] "RemoveContainer" containerID="dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.873902 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195"} err="failed to get container status \"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\": rpc error: code = NotFound desc = could not find container \"dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195\": container with ID starting with dc93128ecb53884c776154eafc7f29837e9c378a10c37df5d85d608ef14d7195 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.873927 5008 scope.go:117] "RemoveContainer" containerID="eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.874129 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1"} err="failed to get container status \"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\": rpc error: code = NotFound desc = could not find container \"eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1\": container with ID starting with eddc7bcf8b28e2d71e41dbad61e84e0e0ac1e2702628a400e9c16dcc4303cad1 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.874145 5008 scope.go:117] "RemoveContainer" containerID="b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.874302 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420"} err="failed to get container status \"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\": rpc error: code = NotFound desc = could not find 
container \"b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420\": container with ID starting with b82de879355c27b3c577b5d5a292b2c1db266e6d92a8e01409bf87ede71ba420 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.874323 5008 scope.go:117] "RemoveContainer" containerID="84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.874492 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5"} err="failed to get container status \"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\": rpc error: code = NotFound desc = could not find container \"84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5\": container with ID starting with 84bee79a5084a74e833cfe4bac65bc4b319e7a41e9f3e8c7ee7de383385da1a5 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.874507 5008 scope.go:117] "RemoveContainer" containerID="08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.874654 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1"} err="failed to get container status \"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\": rpc error: code = NotFound desc = could not find container \"08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1\": container with ID starting with 08beb10f1715c1ca4bbe5b5ecf918e595f3befca424a2b65a06e682936dcc9c1 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.874671 5008 scope.go:117] "RemoveContainer" containerID="3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.874811 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554"} err="failed to get container status \"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\": rpc error: code = NotFound desc = could not find container \"3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554\": container with ID starting with 3e0b6c0db5ed1e87ffade45aa1c7194322bbf680050f9b7328a3584db57e1554 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.874823 5008 scope.go:117] "RemoveContainer" containerID="676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.874946 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8"} err="failed to get container status \"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\": rpc error: code = NotFound desc = could not find container \"676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8\": container with ID starting with 676b28dc78242b0ec7c7a3643a048da9020c807de1f4ddd0cd801f60a1bf41a8 not found: ID does not exist" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.874958 5008 scope.go:117] "RemoveContainer" containerID="6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6" Jan 29 15:40:14 crc kubenswrapper[5008]: I0129 15:40:14.875077 5008 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6"} err="failed to get container status \"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\": rpc error: code = NotFound desc = could not find container \"6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6\": container with ID starting with 6807035e51f1a1b563d2c2de6ad73607b2a3bbb9b4336cb9dfeea693d35fdda6 not found: ID does not exist" Jan 29 15:40:15 crc kubenswrapper[5008]: I0129 15:40:15.111309 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-42hcz_cdd8ae23-3f9f-49f8-928d-46dad823fde4/kube-multus/2.log" Jan 29 15:40:15 crc kubenswrapper[5008]: I0129 15:40:15.111501 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-42hcz" event={"ID":"cdd8ae23-3f9f-49f8-928d-46dad823fde4","Type":"ContainerStarted","Data":"d46a39529b97b61410e13a7d9304aa0dd14dbc6d16966288979eb24becea51db"} Jan 29 15:40:15 crc kubenswrapper[5008]: I0129 15:40:15.114036 5008 generic.go:334] "Generic (PLEG): container finished" podID="252dea6f-dc2c-4c83-8930-535e5b0f6cdb" containerID="bb73a6ec1fa63921bda754c59c764e3d2bd7db1e1393b4a9216781bf6be1c628" exitCode=0 Jan 29 15:40:15 crc kubenswrapper[5008]: I0129 15:40:15.114122 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" event={"ID":"252dea6f-dc2c-4c83-8930-535e5b0f6cdb","Type":"ContainerDied","Data":"bb73a6ec1fa63921bda754c59c764e3d2bd7db1e1393b4a9216781bf6be1c628"} Jan 29 15:40:15 crc kubenswrapper[5008]: I0129 15:40:15.114176 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" event={"ID":"252dea6f-dc2c-4c83-8930-535e5b0f6cdb","Type":"ContainerStarted","Data":"ae1ef17fa87e70552ab49e9b4a89f9dfbeaebd92cd6bd29ade10978c8c8d56a4"} Jan 29 15:40:15 crc kubenswrapper[5008]: I0129 15:40:15.342445 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d092513-7735-4c98-9734-57bc46b99280" path="/var/lib/kubelet/pods/1d092513-7735-4c98-9734-57bc46b99280/volumes" Jan 29 15:40:16 crc kubenswrapper[5008]: I0129 15:40:16.134425 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" event={"ID":"252dea6f-dc2c-4c83-8930-535e5b0f6cdb","Type":"ContainerStarted","Data":"dd7fd4620a00cce2e9471153be8380bfedf0aa02f055f8c8eb7c8213056c94cc"} Jan 29 15:40:16 crc kubenswrapper[5008]: I0129 15:40:16.134480 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" event={"ID":"252dea6f-dc2c-4c83-8930-535e5b0f6cdb","Type":"ContainerStarted","Data":"72c1ca5d6995c298daadff32b340b5c8ac9f6657fe5d11b2744d0d7bc88498cc"} Jan 29 15:40:16 crc kubenswrapper[5008]: I0129 15:40:16.134558 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" event={"ID":"252dea6f-dc2c-4c83-8930-535e5b0f6cdb","Type":"ContainerStarted","Data":"243c6d38dfa7aec0df8e192e309178df3c79e26176d7e0c55ec79c45bd588bd2"} Jan 29 15:40:16 crc kubenswrapper[5008]: I0129 15:40:16.134577 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" event={"ID":"252dea6f-dc2c-4c83-8930-535e5b0f6cdb","Type":"ContainerStarted","Data":"c9d3d074e21fba1b8b2b0b6c9b269e1ea430aaf44a008d77d3577f7b0c3f056c"} Jan 29 15:40:16 crc kubenswrapper[5008]: I0129 15:40:16.134597 5008 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" event={"ID":"252dea6f-dc2c-4c83-8930-535e5b0f6cdb","Type":"ContainerStarted","Data":"070cb0ab65e3f22ee8eb14977bc7ad1cd9a0ce4c6bae2bf411b38ae768696216"} Jan 29 15:40:17 crc kubenswrapper[5008]: I0129 15:40:17.143579 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" event={"ID":"252dea6f-dc2c-4c83-8930-535e5b0f6cdb","Type":"ContainerStarted","Data":"54228a876c93cb98d1f7f195a35b2db9846b1a211ccb6604cf4b5f4cc5e72ae0"} Jan 29 15:40:18 crc kubenswrapper[5008]: I0129 15:40:18.581644 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-wvlhn" Jan 29 15:40:19 crc kubenswrapper[5008]: I0129 15:40:19.158175 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" event={"ID":"252dea6f-dc2c-4c83-8930-535e5b0f6cdb","Type":"ContainerStarted","Data":"0e385850439d8343e6a7c9a32f04d03f3a179368ab5366fd5c6c2a330cff055a"} Jan 29 15:40:21 crc kubenswrapper[5008]: I0129 15:40:21.177714 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" event={"ID":"252dea6f-dc2c-4c83-8930-535e5b0f6cdb","Type":"ContainerStarted","Data":"3efbffe74014b04edd9c26fd66f4583e39dde1552fa69a8c378893b640904fe3"} Jan 29 15:40:21 crc kubenswrapper[5008]: I0129 15:40:21.178192 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:21 crc kubenswrapper[5008]: I0129 15:40:21.178246 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:21 crc kubenswrapper[5008]: I0129 15:40:21.223096 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:21 crc kubenswrapper[5008]: I0129 15:40:21.224922 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" podStartSLOduration=8.224901451 podStartE2EDuration="8.224901451s" podCreationTimestamp="2026-01-29 15:40:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:40:21.219238914 +0000 UTC m=+764.892093171" watchObservedRunningTime="2026-01-29 15:40:21.224901451 +0000 UTC m=+764.897755728" Jan 29 15:40:22 crc kubenswrapper[5008]: I0129 15:40:22.184616 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:22 crc kubenswrapper[5008]: I0129 15:40:22.216627 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:40:22 crc kubenswrapper[5008]: I0129 15:40:22.823915 5008 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 15:40:43 crc kubenswrapper[5008]: I0129 15:40:43.990679 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:40:43 crc kubenswrapper[5008]: I0129 15:40:43.991367 5008 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:40:44 crc kubenswrapper[5008]: I0129 15:40:44.553176 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j9h2f" Jan 29 15:41:01 crc kubenswrapper[5008]: I0129 15:41:01.276490 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s"] Jan 29 15:41:01 crc kubenswrapper[5008]: I0129 15:41:01.278970 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" Jan 29 15:41:01 crc kubenswrapper[5008]: I0129 15:41:01.284306 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 15:41:01 crc kubenswrapper[5008]: I0129 15:41:01.293230 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s"] Jan 29 15:41:01 crc kubenswrapper[5008]: I0129 15:41:01.373492 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d4466921-85af-471c-956d-71f6576ca8f1-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s\" (UID: \"d4466921-85af-471c-956d-71f6576ca8f1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" Jan 29 15:41:01 crc kubenswrapper[5008]: I0129 15:41:01.373631 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vvch\" (UniqueName: \"kubernetes.io/projected/d4466921-85af-471c-956d-71f6576ca8f1-kube-api-access-9vvch\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s\" (UID: \"d4466921-85af-471c-956d-71f6576ca8f1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" Jan 29 15:41:01 crc kubenswrapper[5008]: I0129 15:41:01.373834 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d4466921-85af-471c-956d-71f6576ca8f1-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s\" (UID: \"d4466921-85af-471c-956d-71f6576ca8f1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" Jan 29 15:41:01 crc kubenswrapper[5008]: I0129 15:41:01.475492 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vvch\" (UniqueName: \"kubernetes.io/projected/d4466921-85af-471c-956d-71f6576ca8f1-kube-api-access-9vvch\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s\" (UID: \"d4466921-85af-471c-956d-71f6576ca8f1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" Jan 29 15:41:01 crc kubenswrapper[5008]: I0129 15:41:01.475558 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d4466921-85af-471c-956d-71f6576ca8f1-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s\" (UID: 
\"d4466921-85af-471c-956d-71f6576ca8f1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" Jan 29 15:41:01 crc kubenswrapper[5008]: I0129 15:41:01.475626 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d4466921-85af-471c-956d-71f6576ca8f1-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s\" (UID: \"d4466921-85af-471c-956d-71f6576ca8f1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" Jan 29 15:41:01 crc kubenswrapper[5008]: I0129 15:41:01.476162 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d4466921-85af-471c-956d-71f6576ca8f1-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s\" (UID: \"d4466921-85af-471c-956d-71f6576ca8f1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" Jan 29 15:41:01 crc kubenswrapper[5008]: I0129 15:41:01.476572 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d4466921-85af-471c-956d-71f6576ca8f1-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s\" (UID: \"d4466921-85af-471c-956d-71f6576ca8f1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" Jan 29 15:41:01 crc kubenswrapper[5008]: I0129 15:41:01.503254 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vvch\" (UniqueName: \"kubernetes.io/projected/d4466921-85af-471c-956d-71f6576ca8f1-kube-api-access-9vvch\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s\" (UID: \"d4466921-85af-471c-956d-71f6576ca8f1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" Jan 29 15:41:01 crc kubenswrapper[5008]: I0129 15:41:01.626517 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" Jan 29 15:41:02 crc kubenswrapper[5008]: I0129 15:41:02.083857 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s"] Jan 29 15:41:02 crc kubenswrapper[5008]: I0129 15:41:02.439231 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" event={"ID":"d4466921-85af-471c-956d-71f6576ca8f1","Type":"ContainerStarted","Data":"45c0d3bfc02dd3d17f027c6ab3a004555b3fd15eea464fca480ac0ab9176088b"} Jan 29 15:41:02 crc kubenswrapper[5008]: I0129 15:41:02.439678 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" event={"ID":"d4466921-85af-471c-956d-71f6576ca8f1","Type":"ContainerStarted","Data":"a322a92a50a11512f291e3cd16751143a7c8cc3846e26fe4b2393775dd3b9eb4"} Jan 29 15:41:03 crc kubenswrapper[5008]: I0129 15:41:03.447699 5008 generic.go:334] "Generic (PLEG): container finished" podID="d4466921-85af-471c-956d-71f6576ca8f1" containerID="45c0d3bfc02dd3d17f027c6ab3a004555b3fd15eea464fca480ac0ab9176088b" exitCode=0 Jan 29 15:41:03 crc kubenswrapper[5008]: I0129 15:41:03.447765 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" event={"ID":"d4466921-85af-471c-956d-71f6576ca8f1","Type":"ContainerDied","Data":"45c0d3bfc02dd3d17f027c6ab3a004555b3fd15eea464fca480ac0ab9176088b"} Jan 29 15:41:03 crc kubenswrapper[5008]: I0129 15:41:03.571351 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l4krl"] Jan 29 15:41:03 crc kubenswrapper[5008]: I0129 15:41:03.572763 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:03 crc kubenswrapper[5008]: I0129 15:41:03.581991 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l4krl"] Jan 29 15:41:03 crc kubenswrapper[5008]: I0129 15:41:03.707271 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/800868e4-e114-49d4-a9b4-3ee8fc4ea341-catalog-content\") pod \"redhat-operators-l4krl\" (UID: \"800868e4-e114-49d4-a9b4-3ee8fc4ea341\") " pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:03 crc kubenswrapper[5008]: I0129 15:41:03.707306 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpmc9\" (UniqueName: \"kubernetes.io/projected/800868e4-e114-49d4-a9b4-3ee8fc4ea341-kube-api-access-mpmc9\") pod \"redhat-operators-l4krl\" (UID: \"800868e4-e114-49d4-a9b4-3ee8fc4ea341\") " pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:03 crc kubenswrapper[5008]: I0129 15:41:03.707418 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/800868e4-e114-49d4-a9b4-3ee8fc4ea341-utilities\") pod \"redhat-operators-l4krl\" (UID: \"800868e4-e114-49d4-a9b4-3ee8fc4ea341\") " pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:03 crc kubenswrapper[5008]: I0129 15:41:03.809231 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/800868e4-e114-49d4-a9b4-3ee8fc4ea341-catalog-content\") pod \"redhat-operators-l4krl\" (UID: \"800868e4-e114-49d4-a9b4-3ee8fc4ea341\") " pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:03 crc kubenswrapper[5008]: I0129 15:41:03.809290 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpmc9\" (UniqueName: \"kubernetes.io/projected/800868e4-e114-49d4-a9b4-3ee8fc4ea341-kube-api-access-mpmc9\") pod \"redhat-operators-l4krl\" (UID: \"800868e4-e114-49d4-a9b4-3ee8fc4ea341\") " pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:03 crc kubenswrapper[5008]: I0129 15:41:03.809359 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/800868e4-e114-49d4-a9b4-3ee8fc4ea341-utilities\") pod \"redhat-operators-l4krl\" (UID: \"800868e4-e114-49d4-a9b4-3ee8fc4ea341\") " pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:03 crc kubenswrapper[5008]: I0129 15:41:03.809719 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/800868e4-e114-49d4-a9b4-3ee8fc4ea341-catalog-content\") pod \"redhat-operators-l4krl\" (UID: \"800868e4-e114-49d4-a9b4-3ee8fc4ea341\") " pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:03 crc kubenswrapper[5008]: I0129 15:41:03.809813 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/800868e4-e114-49d4-a9b4-3ee8fc4ea341-utilities\") pod \"redhat-operators-l4krl\" (UID: \"800868e4-e114-49d4-a9b4-3ee8fc4ea341\") " pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:03 crc kubenswrapper[5008]: I0129 15:41:03.830804 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mpmc9\" (UniqueName: \"kubernetes.io/projected/800868e4-e114-49d4-a9b4-3ee8fc4ea341-kube-api-access-mpmc9\") pod \"redhat-operators-l4krl\" (UID: \"800868e4-e114-49d4-a9b4-3ee8fc4ea341\") " pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:03 crc kubenswrapper[5008]: I0129 15:41:03.925861 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:04 crc kubenswrapper[5008]: I0129 15:41:04.170293 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l4krl"] Jan 29 15:41:04 crc kubenswrapper[5008]: I0129 15:41:04.454549 5008 generic.go:334] "Generic (PLEG): container finished" podID="800868e4-e114-49d4-a9b4-3ee8fc4ea341" containerID="cbc18e3bc2643b57a3277fc511d873137c3e97944270bc0d5e5eb0f4dc1ee274" exitCode=0 Jan 29 15:41:04 crc kubenswrapper[5008]: I0129 15:41:04.454639 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4krl" event={"ID":"800868e4-e114-49d4-a9b4-3ee8fc4ea341","Type":"ContainerDied","Data":"cbc18e3bc2643b57a3277fc511d873137c3e97944270bc0d5e5eb0f4dc1ee274"} Jan 29 15:41:04 crc kubenswrapper[5008]: I0129 15:41:04.454927 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4krl" event={"ID":"800868e4-e114-49d4-a9b4-3ee8fc4ea341","Type":"ContainerStarted","Data":"2b7ddcb62ecbf05357c096771ab213c80505ee7aadc4ebe5c0c9a2c9f79dd618"} Jan 29 15:41:05 crc kubenswrapper[5008]: I0129 15:41:05.464325 5008 generic.go:334] "Generic (PLEG): container finished" podID="d4466921-85af-471c-956d-71f6576ca8f1" containerID="52c8b124b65393d43aabd0cbd342b413321b2e5bbf39a0745c27f0859f1430c4" exitCode=0 Jan 29 15:41:05 crc kubenswrapper[5008]: I0129 15:41:05.464772 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" event={"ID":"d4466921-85af-471c-956d-71f6576ca8f1","Type":"ContainerDied","Data":"52c8b124b65393d43aabd0cbd342b413321b2e5bbf39a0745c27f0859f1430c4"} Jan 29 15:41:06 crc kubenswrapper[5008]: I0129 15:41:06.478244 5008 generic.go:334] "Generic (PLEG): container finished" podID="800868e4-e114-49d4-a9b4-3ee8fc4ea341" containerID="8a618c4eb07f9ce54bdbc184cbab44314977f7271a6bdf791d1706f757f3f4e4" exitCode=0 Jan 29 15:41:06 crc kubenswrapper[5008]: I0129 15:41:06.478396 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4krl" event={"ID":"800868e4-e114-49d4-a9b4-3ee8fc4ea341","Type":"ContainerDied","Data":"8a618c4eb07f9ce54bdbc184cbab44314977f7271a6bdf791d1706f757f3f4e4"} Jan 29 15:41:06 crc kubenswrapper[5008]: I0129 15:41:06.486735 5008 generic.go:334] "Generic (PLEG): container finished" podID="d4466921-85af-471c-956d-71f6576ca8f1" containerID="e209c509177916138d041664c5ab18ee9523ce749806cd585afa3713d8559e13" exitCode=0 Jan 29 15:41:06 crc kubenswrapper[5008]: I0129 15:41:06.486841 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" event={"ID":"d4466921-85af-471c-956d-71f6576ca8f1","Type":"ContainerDied","Data":"e209c509177916138d041664c5ab18ee9523ce749806cd585afa3713d8559e13"} Jan 29 15:41:07 crc kubenswrapper[5008]: I0129 15:41:07.500500 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4krl" 
event={"ID":"800868e4-e114-49d4-a9b4-3ee8fc4ea341","Type":"ContainerStarted","Data":"e6d3b279a87e2912316c06cfc0ad9c6a6abf9dd98262ae2255f24dc8fb87f07a"} Jan 29 15:41:07 crc kubenswrapper[5008]: I0129 15:41:07.522120 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l4krl" podStartSLOduration=1.858617206 podStartE2EDuration="4.522105705s" podCreationTimestamp="2026-01-29 15:41:03 +0000 UTC" firstStartedPulling="2026-01-29 15:41:04.459438283 +0000 UTC m=+808.132292530" lastFinishedPulling="2026-01-29 15:41:07.122926762 +0000 UTC m=+810.795781029" observedRunningTime="2026-01-29 15:41:07.520727551 +0000 UTC m=+811.193581788" watchObservedRunningTime="2026-01-29 15:41:07.522105705 +0000 UTC m=+811.194959942" Jan 29 15:41:07 crc kubenswrapper[5008]: I0129 15:41:07.731003 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" Jan 29 15:41:07 crc kubenswrapper[5008]: I0129 15:41:07.861641 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d4466921-85af-471c-956d-71f6576ca8f1-util\") pod \"d4466921-85af-471c-956d-71f6576ca8f1\" (UID: \"d4466921-85af-471c-956d-71f6576ca8f1\") " Jan 29 15:41:07 crc kubenswrapper[5008]: I0129 15:41:07.861808 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vvch\" (UniqueName: \"kubernetes.io/projected/d4466921-85af-471c-956d-71f6576ca8f1-kube-api-access-9vvch\") pod \"d4466921-85af-471c-956d-71f6576ca8f1\" (UID: \"d4466921-85af-471c-956d-71f6576ca8f1\") " Jan 29 15:41:07 crc kubenswrapper[5008]: I0129 15:41:07.861857 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d4466921-85af-471c-956d-71f6576ca8f1-bundle\") pod \"d4466921-85af-471c-956d-71f6576ca8f1\" (UID: \"d4466921-85af-471c-956d-71f6576ca8f1\") " Jan 29 15:41:07 crc kubenswrapper[5008]: I0129 15:41:07.862580 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4466921-85af-471c-956d-71f6576ca8f1-bundle" (OuterVolumeSpecName: "bundle") pod "d4466921-85af-471c-956d-71f6576ca8f1" (UID: "d4466921-85af-471c-956d-71f6576ca8f1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:41:07 crc kubenswrapper[5008]: I0129 15:41:07.873049 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4466921-85af-471c-956d-71f6576ca8f1-kube-api-access-9vvch" (OuterVolumeSpecName: "kube-api-access-9vvch") pod "d4466921-85af-471c-956d-71f6576ca8f1" (UID: "d4466921-85af-471c-956d-71f6576ca8f1"). InnerVolumeSpecName "kube-api-access-9vvch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:41:07 crc kubenswrapper[5008]: I0129 15:41:07.963320 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vvch\" (UniqueName: \"kubernetes.io/projected/d4466921-85af-471c-956d-71f6576ca8f1-kube-api-access-9vvch\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:07 crc kubenswrapper[5008]: I0129 15:41:07.963369 5008 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d4466921-85af-471c-956d-71f6576ca8f1-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:08 crc kubenswrapper[5008]: I0129 15:41:08.014760 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4466921-85af-471c-956d-71f6576ca8f1-util" (OuterVolumeSpecName: "util") pod "d4466921-85af-471c-956d-71f6576ca8f1" (UID: "d4466921-85af-471c-956d-71f6576ca8f1"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:41:08 crc kubenswrapper[5008]: I0129 15:41:08.064631 5008 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d4466921-85af-471c-956d-71f6576ca8f1-util\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:08 crc kubenswrapper[5008]: I0129 15:41:08.508944 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" event={"ID":"d4466921-85af-471c-956d-71f6576ca8f1","Type":"ContainerDied","Data":"a322a92a50a11512f291e3cd16751143a7c8cc3846e26fe4b2393775dd3b9eb4"} Jan 29 15:41:08 crc kubenswrapper[5008]: I0129 15:41:08.509018 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a322a92a50a11512f291e3cd16751143a7c8cc3846e26fe4b2393775dd3b9eb4" Jan 29 15:41:08 crc kubenswrapper[5008]: I0129 15:41:08.509079 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s" Jan 29 15:41:10 crc kubenswrapper[5008]: I0129 15:41:10.888642 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-dkpn2"] Jan 29 15:41:10 crc kubenswrapper[5008]: E0129 15:41:10.889203 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4466921-85af-471c-956d-71f6576ca8f1" containerName="extract" Jan 29 15:41:10 crc kubenswrapper[5008]: I0129 15:41:10.889217 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4466921-85af-471c-956d-71f6576ca8f1" containerName="extract" Jan 29 15:41:10 crc kubenswrapper[5008]: E0129 15:41:10.889231 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4466921-85af-471c-956d-71f6576ca8f1" containerName="util" Jan 29 15:41:10 crc kubenswrapper[5008]: I0129 15:41:10.889237 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4466921-85af-471c-956d-71f6576ca8f1" containerName="util" Jan 29 15:41:10 crc kubenswrapper[5008]: E0129 15:41:10.889252 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4466921-85af-471c-956d-71f6576ca8f1" containerName="pull" Jan 29 15:41:10 crc kubenswrapper[5008]: I0129 15:41:10.889259 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4466921-85af-471c-956d-71f6576ca8f1" containerName="pull" Jan 29 15:41:10 crc kubenswrapper[5008]: I0129 15:41:10.889366 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4466921-85af-471c-956d-71f6576ca8f1" containerName="extract" Jan 29 15:41:10 crc kubenswrapper[5008]: I0129 15:41:10.889799 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-dkpn2" Jan 29 15:41:10 crc kubenswrapper[5008]: I0129 15:41:10.891521 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 29 15:41:10 crc kubenswrapper[5008]: I0129 15:41:10.891537 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 29 15:41:10 crc kubenswrapper[5008]: I0129 15:41:10.894794 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-8mbqk" Jan 29 15:41:10 crc kubenswrapper[5008]: I0129 15:41:10.899043 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-dkpn2"] Jan 29 15:41:11 crc kubenswrapper[5008]: I0129 15:41:11.003465 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgw85\" (UniqueName: \"kubernetes.io/projected/5fab4312-8998-4667-af25-ba459fcb4a68-kube-api-access-xgw85\") pod \"nmstate-operator-646758c888-dkpn2\" (UID: \"5fab4312-8998-4667-af25-ba459fcb4a68\") " pod="openshift-nmstate/nmstate-operator-646758c888-dkpn2" Jan 29 15:41:11 crc kubenswrapper[5008]: I0129 15:41:11.105031 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgw85\" (UniqueName: \"kubernetes.io/projected/5fab4312-8998-4667-af25-ba459fcb4a68-kube-api-access-xgw85\") pod \"nmstate-operator-646758c888-dkpn2\" (UID: \"5fab4312-8998-4667-af25-ba459fcb4a68\") " pod="openshift-nmstate/nmstate-operator-646758c888-dkpn2" Jan 29 15:41:11 crc kubenswrapper[5008]: I0129 15:41:11.123305 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgw85\" 
(UniqueName: \"kubernetes.io/projected/5fab4312-8998-4667-af25-ba459fcb4a68-kube-api-access-xgw85\") pod \"nmstate-operator-646758c888-dkpn2\" (UID: \"5fab4312-8998-4667-af25-ba459fcb4a68\") " pod="openshift-nmstate/nmstate-operator-646758c888-dkpn2" Jan 29 15:41:11 crc kubenswrapper[5008]: I0129 15:41:11.203220 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-dkpn2" Jan 29 15:41:11 crc kubenswrapper[5008]: I0129 15:41:11.401596 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-dkpn2"] Jan 29 15:41:11 crc kubenswrapper[5008]: I0129 15:41:11.526970 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-dkpn2" event={"ID":"5fab4312-8998-4667-af25-ba459fcb4a68","Type":"ContainerStarted","Data":"cec16a797b95797585798004c4b06dcbd977b678809533a539c0fa270affa418"} Jan 29 15:41:13 crc kubenswrapper[5008]: I0129 15:41:13.928027 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:13 crc kubenswrapper[5008]: I0129 15:41:13.928390 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:13 crc kubenswrapper[5008]: I0129 15:41:13.990976 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:41:13 crc kubenswrapper[5008]: I0129 15:41:13.991068 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:41:14 crc kubenswrapper[5008]: I0129 15:41:14.988840 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l4krl" podUID="800868e4-e114-49d4-a9b4-3ee8fc4ea341" containerName="registry-server" probeResult="failure" output=< Jan 29 15:41:14 crc kubenswrapper[5008]: timeout: failed to connect service ":50051" within 1s Jan 29 15:41:14 crc kubenswrapper[5008]: > Jan 29 15:41:18 crc kubenswrapper[5008]: I0129 15:41:18.584061 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-dkpn2" event={"ID":"5fab4312-8998-4667-af25-ba459fcb4a68","Type":"ContainerStarted","Data":"999f53e8a1e30402259b51e8007e6fc217a82447d0da60a1d1277a177303b708"} Jan 29 15:41:18 crc kubenswrapper[5008]: I0129 15:41:18.611472 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-dkpn2" podStartSLOduration=2.064072083 podStartE2EDuration="8.611449144s" podCreationTimestamp="2026-01-29 15:41:10 +0000 UTC" firstStartedPulling="2026-01-29 15:41:11.413804104 +0000 UTC m=+815.086658351" lastFinishedPulling="2026-01-29 15:41:17.961181165 +0000 UTC m=+821.634035412" observedRunningTime="2026-01-29 15:41:18.611365662 +0000 UTC m=+822.284219979" watchObservedRunningTime="2026-01-29 15:41:18.611449144 +0000 UTC m=+822.284303421" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.550172 5008 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-mtz4q"] Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.551693 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-mtz4q" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.554019 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-72hz4" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.563848 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-mtz4q"] Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.572796 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-qz5xs"] Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.573968 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qz5xs" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.576586 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.604722 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-8hxxx"] Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.605800 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-8hxxx" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.636897 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-qz5xs"] Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.641682 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/beee9730-825d-4a7e-9ef1-d735b1bddd07-nmstate-lock\") pod \"nmstate-handler-8hxxx\" (UID: \"beee9730-825d-4a7e-9ef1-d735b1bddd07\") " pod="openshift-nmstate/nmstate-handler-8hxxx" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.641849 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6a7e5f12-26c5-4197-81ed-559569651fab-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-qz5xs\" (UID: \"6a7e5f12-26c5-4197-81ed-559569651fab\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qz5xs" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.641951 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcfb8\" (UniqueName: \"kubernetes.io/projected/6a7e5f12-26c5-4197-81ed-559569651fab-kube-api-access-qcfb8\") pod \"nmstate-webhook-8474b5b9d8-qz5xs\" (UID: \"6a7e5f12-26c5-4197-81ed-559569651fab\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qz5xs" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.642140 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/beee9730-825d-4a7e-9ef1-d735b1bddd07-dbus-socket\") pod \"nmstate-handler-8hxxx\" (UID: \"beee9730-825d-4a7e-9ef1-d735b1bddd07\") " pod="openshift-nmstate/nmstate-handler-8hxxx" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.642247 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/beee9730-825d-4a7e-9ef1-d735b1bddd07-ovs-socket\") pod \"nmstate-handler-8hxxx\" (UID: \"beee9730-825d-4a7e-9ef1-d735b1bddd07\") " pod="openshift-nmstate/nmstate-handler-8hxxx" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.642290 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkhxt\" (UniqueName: \"kubernetes.io/projected/beee9730-825d-4a7e-9ef1-d735b1bddd07-kube-api-access-hkhxt\") pod \"nmstate-handler-8hxxx\" (UID: \"beee9730-825d-4a7e-9ef1-d735b1bddd07\") " pod="openshift-nmstate/nmstate-handler-8hxxx" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.642451 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqz8v\" (UniqueName: \"kubernetes.io/projected/5379965a-18ce-41a4-8753-7a70ed4a5efc-kube-api-access-cqz8v\") pod \"nmstate-metrics-54757c584b-mtz4q\" (UID: \"5379965a-18ce-41a4-8753-7a70ed4a5efc\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-mtz4q" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.686363 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47"] Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.689933 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.696074 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47"] Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.698428 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-ls5ll" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.698469 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.698496 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.743583 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcfb8\" (UniqueName: \"kubernetes.io/projected/6a7e5f12-26c5-4197-81ed-559569651fab-kube-api-access-qcfb8\") pod \"nmstate-webhook-8474b5b9d8-qz5xs\" (UID: \"6a7e5f12-26c5-4197-81ed-559569651fab\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qz5xs" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.743647 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/beee9730-825d-4a7e-9ef1-d735b1bddd07-dbus-socket\") pod \"nmstate-handler-8hxxx\" (UID: \"beee9730-825d-4a7e-9ef1-d735b1bddd07\") " pod="openshift-nmstate/nmstate-handler-8hxxx" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.743679 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/beee9730-825d-4a7e-9ef1-d735b1bddd07-ovs-socket\") pod \"nmstate-handler-8hxxx\" (UID: \"beee9730-825d-4a7e-9ef1-d735b1bddd07\") " pod="openshift-nmstate/nmstate-handler-8hxxx" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.743704 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkhxt\" (UniqueName: 
\"kubernetes.io/projected/beee9730-825d-4a7e-9ef1-d735b1bddd07-kube-api-access-hkhxt\") pod \"nmstate-handler-8hxxx\" (UID: \"beee9730-825d-4a7e-9ef1-d735b1bddd07\") " pod="openshift-nmstate/nmstate-handler-8hxxx" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.743740 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/75f20405-b349-4e5f-ba1a-b6bf348766ce-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-dvn47\" (UID: \"75f20405-b349-4e5f-ba1a-b6bf348766ce\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.743775 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqz8v\" (UniqueName: \"kubernetes.io/projected/5379965a-18ce-41a4-8753-7a70ed4a5efc-kube-api-access-cqz8v\") pod \"nmstate-metrics-54757c584b-mtz4q\" (UID: \"5379965a-18ce-41a4-8753-7a70ed4a5efc\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-mtz4q" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.743818 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/beee9730-825d-4a7e-9ef1-d735b1bddd07-nmstate-lock\") pod \"nmstate-handler-8hxxx\" (UID: \"beee9730-825d-4a7e-9ef1-d735b1bddd07\") " pod="openshift-nmstate/nmstate-handler-8hxxx" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.743841 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kzz5\" (UniqueName: \"kubernetes.io/projected/75f20405-b349-4e5f-ba1a-b6bf348766ce-kube-api-access-2kzz5\") pod \"nmstate-console-plugin-7754f76f8b-dvn47\" (UID: \"75f20405-b349-4e5f-ba1a-b6bf348766ce\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.743881 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/75f20405-b349-4e5f-ba1a-b6bf348766ce-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-dvn47\" (UID: \"75f20405-b349-4e5f-ba1a-b6bf348766ce\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.743912 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6a7e5f12-26c5-4197-81ed-559569651fab-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-qz5xs\" (UID: \"6a7e5f12-26c5-4197-81ed-559569651fab\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qz5xs" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.744127 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/beee9730-825d-4a7e-9ef1-d735b1bddd07-dbus-socket\") pod \"nmstate-handler-8hxxx\" (UID: \"beee9730-825d-4a7e-9ef1-d735b1bddd07\") " pod="openshift-nmstate/nmstate-handler-8hxxx" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.744426 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/beee9730-825d-4a7e-9ef1-d735b1bddd07-nmstate-lock\") pod \"nmstate-handler-8hxxx\" (UID: \"beee9730-825d-4a7e-9ef1-d735b1bddd07\") " pod="openshift-nmstate/nmstate-handler-8hxxx" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.744602 5008 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/beee9730-825d-4a7e-9ef1-d735b1bddd07-ovs-socket\") pod \"nmstate-handler-8hxxx\" (UID: \"beee9730-825d-4a7e-9ef1-d735b1bddd07\") " pod="openshift-nmstate/nmstate-handler-8hxxx" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.757405 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6a7e5f12-26c5-4197-81ed-559569651fab-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-qz5xs\" (UID: \"6a7e5f12-26c5-4197-81ed-559569651fab\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qz5xs" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.760521 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqz8v\" (UniqueName: \"kubernetes.io/projected/5379965a-18ce-41a4-8753-7a70ed4a5efc-kube-api-access-cqz8v\") pod \"nmstate-metrics-54757c584b-mtz4q\" (UID: \"5379965a-18ce-41a4-8753-7a70ed4a5efc\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-mtz4q" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.760589 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcfb8\" (UniqueName: \"kubernetes.io/projected/6a7e5f12-26c5-4197-81ed-559569651fab-kube-api-access-qcfb8\") pod \"nmstate-webhook-8474b5b9d8-qz5xs\" (UID: \"6a7e5f12-26c5-4197-81ed-559569651fab\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qz5xs" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.761171 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkhxt\" (UniqueName: \"kubernetes.io/projected/beee9730-825d-4a7e-9ef1-d735b1bddd07-kube-api-access-hkhxt\") pod \"nmstate-handler-8hxxx\" (UID: \"beee9730-825d-4a7e-9ef1-d735b1bddd07\") " pod="openshift-nmstate/nmstate-handler-8hxxx" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.845361 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/75f20405-b349-4e5f-ba1a-b6bf348766ce-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-dvn47\" (UID: \"75f20405-b349-4e5f-ba1a-b6bf348766ce\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.845429 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kzz5\" (UniqueName: \"kubernetes.io/projected/75f20405-b349-4e5f-ba1a-b6bf348766ce-kube-api-access-2kzz5\") pod \"nmstate-console-plugin-7754f76f8b-dvn47\" (UID: \"75f20405-b349-4e5f-ba1a-b6bf348766ce\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.845458 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/75f20405-b349-4e5f-ba1a-b6bf348766ce-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-dvn47\" (UID: \"75f20405-b349-4e5f-ba1a-b6bf348766ce\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47" Jan 29 15:41:21 crc kubenswrapper[5008]: E0129 15:41:21.845646 5008 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 29 15:41:21 crc kubenswrapper[5008]: E0129 15:41:21.845796 5008 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/75f20405-b349-4e5f-ba1a-b6bf348766ce-plugin-serving-cert podName:75f20405-b349-4e5f-ba1a-b6bf348766ce nodeName:}" failed. No retries permitted until 2026-01-29 15:41:22.345741101 +0000 UTC m=+826.018595338 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/75f20405-b349-4e5f-ba1a-b6bf348766ce-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-dvn47" (UID: "75f20405-b349-4e5f-ba1a-b6bf348766ce") : secret "plugin-serving-cert" not found Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.846346 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/75f20405-b349-4e5f-ba1a-b6bf348766ce-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-dvn47\" (UID: \"75f20405-b349-4e5f-ba1a-b6bf348766ce\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.866764 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kzz5\" (UniqueName: \"kubernetes.io/projected/75f20405-b349-4e5f-ba1a-b6bf348766ce-kube-api-access-2kzz5\") pod \"nmstate-console-plugin-7754f76f8b-dvn47\" (UID: \"75f20405-b349-4e5f-ba1a-b6bf348766ce\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.871167 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-mtz4q" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.884225 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7784897869-4b45r"] Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.885888 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.893501 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qz5xs" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.897002 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7784897869-4b45r"] Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.927661 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-8hxxx" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.946532 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5e6ae51d-3821-446e-9067-fa071506ad47-service-ca\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.946590 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e6ae51d-3821-446e-9067-fa071506ad47-console-serving-cert\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.946622 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5e6ae51d-3821-446e-9067-fa071506ad47-oauth-serving-cert\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.946671 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdwsw\" (UniqueName: \"kubernetes.io/projected/5e6ae51d-3821-446e-9067-fa071506ad47-kube-api-access-hdwsw\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.946705 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5e6ae51d-3821-446e-9067-fa071506ad47-console-oauth-config\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.946728 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5e6ae51d-3821-446e-9067-fa071506ad47-console-config\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:21 crc kubenswrapper[5008]: I0129 15:41:21.946757 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e6ae51d-3821-446e-9067-fa071506ad47-trusted-ca-bundle\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:21 crc kubenswrapper[5008]: W0129 15:41:21.958832 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbeee9730_825d_4a7e_9ef1_d735b1bddd07.slice/crio-1d31c29c216e09183f799a1201be0a962527f8782edd86fb0cac17ac946a7021 WatchSource:0}: Error finding container 1d31c29c216e09183f799a1201be0a962527f8782edd86fb0cac17ac946a7021: Status 404 returned error can't find the container with id 1d31c29c216e09183f799a1201be0a962527f8782edd86fb0cac17ac946a7021 Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 
15:41:22.048042 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5e6ae51d-3821-446e-9067-fa071506ad47-service-ca\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.048108 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e6ae51d-3821-446e-9067-fa071506ad47-console-serving-cert\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.048163 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5e6ae51d-3821-446e-9067-fa071506ad47-oauth-serving-cert\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.048226 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdwsw\" (UniqueName: \"kubernetes.io/projected/5e6ae51d-3821-446e-9067-fa071506ad47-kube-api-access-hdwsw\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.048266 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5e6ae51d-3821-446e-9067-fa071506ad47-console-oauth-config\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.048293 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5e6ae51d-3821-446e-9067-fa071506ad47-console-config\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.048330 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e6ae51d-3821-446e-9067-fa071506ad47-trusted-ca-bundle\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.051391 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5e6ae51d-3821-446e-9067-fa071506ad47-service-ca\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.051420 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5e6ae51d-3821-446e-9067-fa071506ad47-console-config\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.052029 5008 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e6ae51d-3821-446e-9067-fa071506ad47-trusted-ca-bundle\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.053966 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5e6ae51d-3821-446e-9067-fa071506ad47-oauth-serving-cert\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.054104 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5e6ae51d-3821-446e-9067-fa071506ad47-console-oauth-config\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.054850 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e6ae51d-3821-446e-9067-fa071506ad47-console-serving-cert\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.071119 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdwsw\" (UniqueName: \"kubernetes.io/projected/5e6ae51d-3821-446e-9067-fa071506ad47-kube-api-access-hdwsw\") pod \"console-7784897869-4b45r\" (UID: \"5e6ae51d-3821-446e-9067-fa071506ad47\") " pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.148456 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-qz5xs"] Jan 29 15:41:22 crc kubenswrapper[5008]: W0129 15:41:22.160611 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a7e5f12_26c5_4197_81ed_559569651fab.slice/crio-a4c7d48d897d6fb3cf4d945954cf52108dd325acaef59cdc5bd16346ceb455fe WatchSource:0}: Error finding container a4c7d48d897d6fb3cf4d945954cf52108dd325acaef59cdc5bd16346ceb455fe: Status 404 returned error can't find the container with id a4c7d48d897d6fb3cf4d945954cf52108dd325acaef59cdc5bd16346ceb455fe Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.258841 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.335227 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-mtz4q"] Jan 29 15:41:22 crc kubenswrapper[5008]: W0129 15:41:22.347621 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5379965a_18ce_41a4_8753_7a70ed4a5efc.slice/crio-1233f27f6c132b984b2ecf59fca5f611a06def688d695fd0a3c1db41fe4e3484 WatchSource:0}: Error finding container 1233f27f6c132b984b2ecf59fca5f611a06def688d695fd0a3c1db41fe4e3484: Status 404 returned error can't find the container with id 1233f27f6c132b984b2ecf59fca5f611a06def688d695fd0a3c1db41fe4e3484 Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.350950 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/75f20405-b349-4e5f-ba1a-b6bf348766ce-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-dvn47\" (UID: \"75f20405-b349-4e5f-ba1a-b6bf348766ce\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.359852 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/75f20405-b349-4e5f-ba1a-b6bf348766ce-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-dvn47\" (UID: \"75f20405-b349-4e5f-ba1a-b6bf348766ce\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.513893 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7784897869-4b45r"] Jan 29 15:41:22 crc kubenswrapper[5008]: W0129 15:41:22.531544 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e6ae51d_3821_446e_9067_fa071506ad47.slice/crio-3ac9ddbb43eb16fd98940bdcf7043adbadb238e7e9f6657e7fa72aa83946d295 WatchSource:0}: Error finding container 3ac9ddbb43eb16fd98940bdcf7043adbadb238e7e9f6657e7fa72aa83946d295: Status 404 returned error can't find the container with id 3ac9ddbb43eb16fd98940bdcf7043adbadb238e7e9f6657e7fa72aa83946d295 Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.605464 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-mtz4q" event={"ID":"5379965a-18ce-41a4-8753-7a70ed4a5efc","Type":"ContainerStarted","Data":"1233f27f6c132b984b2ecf59fca5f611a06def688d695fd0a3c1db41fe4e3484"} Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.607873 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47" Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.608348 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7784897869-4b45r" event={"ID":"5e6ae51d-3821-446e-9067-fa071506ad47","Type":"ContainerStarted","Data":"3ac9ddbb43eb16fd98940bdcf7043adbadb238e7e9f6657e7fa72aa83946d295"} Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.609622 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8hxxx" event={"ID":"beee9730-825d-4a7e-9ef1-d735b1bddd07","Type":"ContainerStarted","Data":"1d31c29c216e09183f799a1201be0a962527f8782edd86fb0cac17ac946a7021"} Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.610902 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qz5xs" event={"ID":"6a7e5f12-26c5-4197-81ed-559569651fab","Type":"ContainerStarted","Data":"a4c7d48d897d6fb3cf4d945954cf52108dd325acaef59cdc5bd16346ceb455fe"} Jan 29 15:41:22 crc kubenswrapper[5008]: I0129 15:41:22.841455 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47"] Jan 29 15:41:22 crc kubenswrapper[5008]: W0129 15:41:22.858354 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75f20405_b349_4e5f_ba1a_b6bf348766ce.slice/crio-0aeb49a9dea232a8a4a74351828d95aedba893305348b9dede739303fdda53e2 WatchSource:0}: Error finding container 0aeb49a9dea232a8a4a74351828d95aedba893305348b9dede739303fdda53e2: Status 404 returned error can't find the container with id 0aeb49a9dea232a8a4a74351828d95aedba893305348b9dede739303fdda53e2 Jan 29 15:41:23 crc kubenswrapper[5008]: I0129 15:41:23.617221 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7784897869-4b45r" event={"ID":"5e6ae51d-3821-446e-9067-fa071506ad47","Type":"ContainerStarted","Data":"1a27d433a0f71d8244bbffd1ba9aad37a6a4b581856277b05f1dee6abaf8a784"} Jan 29 15:41:23 crc kubenswrapper[5008]: I0129 15:41:23.619772 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47" event={"ID":"75f20405-b349-4e5f-ba1a-b6bf348766ce","Type":"ContainerStarted","Data":"0aeb49a9dea232a8a4a74351828d95aedba893305348b9dede739303fdda53e2"} Jan 29 15:41:23 crc kubenswrapper[5008]: I0129 15:41:23.636412 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7784897869-4b45r" podStartSLOduration=2.636394165 podStartE2EDuration="2.636394165s" podCreationTimestamp="2026-01-29 15:41:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:41:23.632549891 +0000 UTC m=+827.305404128" watchObservedRunningTime="2026-01-29 15:41:23.636394165 +0000 UTC m=+827.309248402" Jan 29 15:41:23 crc kubenswrapper[5008]: I0129 15:41:23.986259 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:24 crc kubenswrapper[5008]: I0129 15:41:24.029072 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:24 crc kubenswrapper[5008]: I0129 15:41:24.213063 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l4krl"] Jan 29 
15:41:25 crc kubenswrapper[5008]: I0129 15:41:25.632029 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8hxxx" event={"ID":"beee9730-825d-4a7e-9ef1-d735b1bddd07","Type":"ContainerStarted","Data":"256527c06d4ff808980cdf28e7090e8f13a3e67879d7867ad01d3e7a3a5b9977"} Jan 29 15:41:25 crc kubenswrapper[5008]: I0129 15:41:25.632594 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-8hxxx" Jan 29 15:41:25 crc kubenswrapper[5008]: I0129 15:41:25.634597 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qz5xs" event={"ID":"6a7e5f12-26c5-4197-81ed-559569651fab","Type":"ContainerStarted","Data":"57aa5848698dbbb36c1797fc413596c957e4cd0e709452a69362eabb0d85a81e"} Jan 29 15:41:25 crc kubenswrapper[5008]: I0129 15:41:25.634881 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qz5xs" Jan 29 15:41:25 crc kubenswrapper[5008]: I0129 15:41:25.638855 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-mtz4q" event={"ID":"5379965a-18ce-41a4-8753-7a70ed4a5efc","Type":"ContainerStarted","Data":"ba905c62e61bf16ac634d619ed5b107d2e63c19383f9df4e2387ea322a278500"} Jan 29 15:41:25 crc kubenswrapper[5008]: I0129 15:41:25.639011 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l4krl" podUID="800868e4-e114-49d4-a9b4-3ee8fc4ea341" containerName="registry-server" containerID="cri-o://e6d3b279a87e2912316c06cfc0ad9c6a6abf9dd98262ae2255f24dc8fb87f07a" gracePeriod=2 Jan 29 15:41:25 crc kubenswrapper[5008]: I0129 15:41:25.653173 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-8hxxx" podStartSLOduration=2.16896541 podStartE2EDuration="4.6531557s" podCreationTimestamp="2026-01-29 15:41:21 +0000 UTC" firstStartedPulling="2026-01-29 15:41:21.961965645 +0000 UTC m=+825.634819872" lastFinishedPulling="2026-01-29 15:41:24.446155885 +0000 UTC m=+828.119010162" observedRunningTime="2026-01-29 15:41:25.650554868 +0000 UTC m=+829.323409125" watchObservedRunningTime="2026-01-29 15:41:25.6531557 +0000 UTC m=+829.326009947" Jan 29 15:41:25 crc kubenswrapper[5008]: I0129 15:41:25.668266 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qz5xs" podStartSLOduration=2.382695214 podStartE2EDuration="4.668248766s" podCreationTimestamp="2026-01-29 15:41:21 +0000 UTC" firstStartedPulling="2026-01-29 15:41:22.162420897 +0000 UTC m=+825.835275134" lastFinishedPulling="2026-01-29 15:41:24.447974449 +0000 UTC m=+828.120828686" observedRunningTime="2026-01-29 15:41:25.66553548 +0000 UTC m=+829.338389737" watchObservedRunningTime="2026-01-29 15:41:25.668248766 +0000 UTC m=+829.341103003" Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.037589 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.107816 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/800868e4-e114-49d4-a9b4-3ee8fc4ea341-catalog-content\") pod \"800868e4-e114-49d4-a9b4-3ee8fc4ea341\" (UID: \"800868e4-e114-49d4-a9b4-3ee8fc4ea341\") " Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.107977 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/800868e4-e114-49d4-a9b4-3ee8fc4ea341-utilities\") pod \"800868e4-e114-49d4-a9b4-3ee8fc4ea341\" (UID: \"800868e4-e114-49d4-a9b4-3ee8fc4ea341\") " Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.108017 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpmc9\" (UniqueName: \"kubernetes.io/projected/800868e4-e114-49d4-a9b4-3ee8fc4ea341-kube-api-access-mpmc9\") pod \"800868e4-e114-49d4-a9b4-3ee8fc4ea341\" (UID: \"800868e4-e114-49d4-a9b4-3ee8fc4ea341\") " Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.109299 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/800868e4-e114-49d4-a9b4-3ee8fc4ea341-utilities" (OuterVolumeSpecName: "utilities") pod "800868e4-e114-49d4-a9b4-3ee8fc4ea341" (UID: "800868e4-e114-49d4-a9b4-3ee8fc4ea341"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.112126 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/800868e4-e114-49d4-a9b4-3ee8fc4ea341-kube-api-access-mpmc9" (OuterVolumeSpecName: "kube-api-access-mpmc9") pod "800868e4-e114-49d4-a9b4-3ee8fc4ea341" (UID: "800868e4-e114-49d4-a9b4-3ee8fc4ea341"). InnerVolumeSpecName "kube-api-access-mpmc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.209871 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/800868e4-e114-49d4-a9b4-3ee8fc4ea341-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.209921 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpmc9\" (UniqueName: \"kubernetes.io/projected/800868e4-e114-49d4-a9b4-3ee8fc4ea341-kube-api-access-mpmc9\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.263848 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/800868e4-e114-49d4-a9b4-3ee8fc4ea341-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "800868e4-e114-49d4-a9b4-3ee8fc4ea341" (UID: "800868e4-e114-49d4-a9b4-3ee8fc4ea341"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.312155 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/800868e4-e114-49d4-a9b4-3ee8fc4ea341-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.645084 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47" event={"ID":"75f20405-b349-4e5f-ba1a-b6bf348766ce","Type":"ContainerStarted","Data":"09b95f8b8c4a1f8411757fded669ac923c7d1fb8c53c82f277add267da6a3f1d"} Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.647680 5008 generic.go:334] "Generic (PLEG): container finished" podID="800868e4-e114-49d4-a9b4-3ee8fc4ea341" containerID="e6d3b279a87e2912316c06cfc0ad9c6a6abf9dd98262ae2255f24dc8fb87f07a" exitCode=0 Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.647733 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l4krl" Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.647744 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4krl" event={"ID":"800868e4-e114-49d4-a9b4-3ee8fc4ea341","Type":"ContainerDied","Data":"e6d3b279a87e2912316c06cfc0ad9c6a6abf9dd98262ae2255f24dc8fb87f07a"} Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.647826 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4krl" event={"ID":"800868e4-e114-49d4-a9b4-3ee8fc4ea341","Type":"ContainerDied","Data":"2b7ddcb62ecbf05357c096771ab213c80505ee7aadc4ebe5c0c9a2c9f79dd618"} Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.647852 5008 scope.go:117] "RemoveContainer" containerID="e6d3b279a87e2912316c06cfc0ad9c6a6abf9dd98262ae2255f24dc8fb87f07a" Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.666570 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dvn47" podStartSLOduration=3.001872721 podStartE2EDuration="5.66654812s" podCreationTimestamp="2026-01-29 15:41:21 +0000 UTC" firstStartedPulling="2026-01-29 15:41:22.860894953 +0000 UTC m=+826.533749200" lastFinishedPulling="2026-01-29 15:41:25.525570362 +0000 UTC m=+829.198424599" observedRunningTime="2026-01-29 15:41:26.663890235 +0000 UTC m=+830.336744492" watchObservedRunningTime="2026-01-29 15:41:26.66654812 +0000 UTC m=+830.339402377" Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.689159 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l4krl"] Jan 29 15:41:26 crc kubenswrapper[5008]: I0129 15:41:26.695318 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l4krl"] Jan 29 15:41:27 crc kubenswrapper[5008]: I0129 15:41:27.052450 5008 scope.go:117] "RemoveContainer" containerID="8a618c4eb07f9ce54bdbc184cbab44314977f7271a6bdf791d1706f757f3f4e4" Jan 29 15:41:27 crc kubenswrapper[5008]: I0129 15:41:27.105079 5008 scope.go:117] "RemoveContainer" containerID="cbc18e3bc2643b57a3277fc511d873137c3e97944270bc0d5e5eb0f4dc1ee274" Jan 29 15:41:27 crc kubenswrapper[5008]: I0129 15:41:27.137887 5008 scope.go:117] "RemoveContainer" containerID="e6d3b279a87e2912316c06cfc0ad9c6a6abf9dd98262ae2255f24dc8fb87f07a" Jan 29 15:41:27 crc kubenswrapper[5008]: E0129 15:41:27.138758 5008 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"e6d3b279a87e2912316c06cfc0ad9c6a6abf9dd98262ae2255f24dc8fb87f07a\": container with ID starting with e6d3b279a87e2912316c06cfc0ad9c6a6abf9dd98262ae2255f24dc8fb87f07a not found: ID does not exist" containerID="e6d3b279a87e2912316c06cfc0ad9c6a6abf9dd98262ae2255f24dc8fb87f07a" Jan 29 15:41:27 crc kubenswrapper[5008]: I0129 15:41:27.138860 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6d3b279a87e2912316c06cfc0ad9c6a6abf9dd98262ae2255f24dc8fb87f07a"} err="failed to get container status \"e6d3b279a87e2912316c06cfc0ad9c6a6abf9dd98262ae2255f24dc8fb87f07a\": rpc error: code = NotFound desc = could not find container \"e6d3b279a87e2912316c06cfc0ad9c6a6abf9dd98262ae2255f24dc8fb87f07a\": container with ID starting with e6d3b279a87e2912316c06cfc0ad9c6a6abf9dd98262ae2255f24dc8fb87f07a not found: ID does not exist" Jan 29 15:41:27 crc kubenswrapper[5008]: I0129 15:41:27.138903 5008 scope.go:117] "RemoveContainer" containerID="8a618c4eb07f9ce54bdbc184cbab44314977f7271a6bdf791d1706f757f3f4e4" Jan 29 15:41:27 crc kubenswrapper[5008]: E0129 15:41:27.139426 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a618c4eb07f9ce54bdbc184cbab44314977f7271a6bdf791d1706f757f3f4e4\": container with ID starting with 8a618c4eb07f9ce54bdbc184cbab44314977f7271a6bdf791d1706f757f3f4e4 not found: ID does not exist" containerID="8a618c4eb07f9ce54bdbc184cbab44314977f7271a6bdf791d1706f757f3f4e4" Jan 29 15:41:27 crc kubenswrapper[5008]: I0129 15:41:27.139492 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a618c4eb07f9ce54bdbc184cbab44314977f7271a6bdf791d1706f757f3f4e4"} err="failed to get container status \"8a618c4eb07f9ce54bdbc184cbab44314977f7271a6bdf791d1706f757f3f4e4\": rpc error: code = NotFound desc = could not find container \"8a618c4eb07f9ce54bdbc184cbab44314977f7271a6bdf791d1706f757f3f4e4\": container with ID starting with 8a618c4eb07f9ce54bdbc184cbab44314977f7271a6bdf791d1706f757f3f4e4 not found: ID does not exist" Jan 29 15:41:27 crc kubenswrapper[5008]: I0129 15:41:27.139528 5008 scope.go:117] "RemoveContainer" containerID="cbc18e3bc2643b57a3277fc511d873137c3e97944270bc0d5e5eb0f4dc1ee274" Jan 29 15:41:27 crc kubenswrapper[5008]: E0129 15:41:27.139916 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbc18e3bc2643b57a3277fc511d873137c3e97944270bc0d5e5eb0f4dc1ee274\": container with ID starting with cbc18e3bc2643b57a3277fc511d873137c3e97944270bc0d5e5eb0f4dc1ee274 not found: ID does not exist" containerID="cbc18e3bc2643b57a3277fc511d873137c3e97944270bc0d5e5eb0f4dc1ee274" Jan 29 15:41:27 crc kubenswrapper[5008]: I0129 15:41:27.139958 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbc18e3bc2643b57a3277fc511d873137c3e97944270bc0d5e5eb0f4dc1ee274"} err="failed to get container status \"cbc18e3bc2643b57a3277fc511d873137c3e97944270bc0d5e5eb0f4dc1ee274\": rpc error: code = NotFound desc = could not find container \"cbc18e3bc2643b57a3277fc511d873137c3e97944270bc0d5e5eb0f4dc1ee274\": container with ID starting with cbc18e3bc2643b57a3277fc511d873137c3e97944270bc0d5e5eb0f4dc1ee274 not found: ID does not exist" Jan 29 15:41:27 crc kubenswrapper[5008]: I0129 15:41:27.337996 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="800868e4-e114-49d4-a9b4-3ee8fc4ea341" path="/var/lib/kubelet/pods/800868e4-e114-49d4-a9b4-3ee8fc4ea341/volumes" Jan 29 15:41:27 crc kubenswrapper[5008]: I0129 15:41:27.655036 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-mtz4q" event={"ID":"5379965a-18ce-41a4-8753-7a70ed4a5efc","Type":"ContainerStarted","Data":"c3ca73bc03981863ddf8f1a68738ef20dbc2d8fe1d7193bd6471bc69d2f0c5b7"} Jan 29 15:41:27 crc kubenswrapper[5008]: I0129 15:41:27.690177 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-mtz4q" podStartSLOduration=1.934683419 podStartE2EDuration="6.689980372s" podCreationTimestamp="2026-01-29 15:41:21 +0000 UTC" firstStartedPulling="2026-01-29 15:41:22.351038762 +0000 UTC m=+826.023893039" lastFinishedPulling="2026-01-29 15:41:27.106335745 +0000 UTC m=+830.779189992" observedRunningTime="2026-01-29 15:41:27.676742862 +0000 UTC m=+831.349597119" watchObservedRunningTime="2026-01-29 15:41:27.689980372 +0000 UTC m=+831.362834619" Jan 29 15:41:31 crc kubenswrapper[5008]: I0129 15:41:31.969474 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-8hxxx" Jan 29 15:41:32 crc kubenswrapper[5008]: I0129 15:41:32.260015 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:32 crc kubenswrapper[5008]: I0129 15:41:32.260195 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:32 crc kubenswrapper[5008]: I0129 15:41:32.267694 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:32 crc kubenswrapper[5008]: I0129 15:41:32.701835 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7784897869-4b45r" Jan 29 15:41:32 crc kubenswrapper[5008]: I0129 15:41:32.795617 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-g2rk6"] Jan 29 15:41:41 crc kubenswrapper[5008]: I0129 15:41:41.903705 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qz5xs" Jan 29 15:41:43 crc kubenswrapper[5008]: I0129 15:41:43.990543 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:41:43 crc kubenswrapper[5008]: I0129 15:41:43.991051 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:41:43 crc kubenswrapper[5008]: I0129 15:41:43.991143 5008 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:41:43 crc kubenswrapper[5008]: I0129 15:41:43.992101 5008 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"d89267ade5f0f1bc5747291958183960695e4e4e932d44027e6c4704ebb5c4ef"} pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:41:43 crc kubenswrapper[5008]: I0129 15:41:43.992259 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" containerID="cri-o://d89267ade5f0f1bc5747291958183960695e4e4e932d44027e6c4704ebb5c4ef" gracePeriod=600 Jan 29 15:41:44 crc kubenswrapper[5008]: I0129 15:41:44.789283 5008 generic.go:334] "Generic (PLEG): container finished" podID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerID="d89267ade5f0f1bc5747291958183960695e4e4e932d44027e6c4704ebb5c4ef" exitCode=0 Jan 29 15:41:44 crc kubenswrapper[5008]: I0129 15:41:44.789422 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerDied","Data":"d89267ade5f0f1bc5747291958183960695e4e4e932d44027e6c4704ebb5c4ef"} Jan 29 15:41:44 crc kubenswrapper[5008]: I0129 15:41:44.790159 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerStarted","Data":"f87de1e980db0bd16d914932ff79d49ee9898f73c25f93235e4e1fda574d4c5a"} Jan 29 15:41:44 crc kubenswrapper[5008]: I0129 15:41:44.790189 5008 scope.go:117] "RemoveContainer" containerID="9850a434d4d07df0fe32aef86e993277e84b797db07cefc7dc516322c6794dab" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.458153 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx"] Jan 29 15:41:56 crc kubenswrapper[5008]: E0129 15:41:56.459174 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="800868e4-e114-49d4-a9b4-3ee8fc4ea341" containerName="registry-server" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.459212 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="800868e4-e114-49d4-a9b4-3ee8fc4ea341" containerName="registry-server" Jan 29 15:41:56 crc kubenswrapper[5008]: E0129 15:41:56.459241 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="800868e4-e114-49d4-a9b4-3ee8fc4ea341" containerName="extract-utilities" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.459254 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="800868e4-e114-49d4-a9b4-3ee8fc4ea341" containerName="extract-utilities" Jan 29 15:41:56 crc kubenswrapper[5008]: E0129 15:41:56.459283 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="800868e4-e114-49d4-a9b4-3ee8fc4ea341" containerName="extract-content" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.459296 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="800868e4-e114-49d4-a9b4-3ee8fc4ea341" containerName="extract-content" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.459487 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="800868e4-e114-49d4-a9b4-3ee8fc4ea341" containerName="registry-server" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.461141 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.463843 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.470984 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx"] Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.546743 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/451500d6-673a-42ac-84b5-75d3b9d46998-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx\" (UID: \"451500d6-673a-42ac-84b5-75d3b9d46998\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.546914 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw99z\" (UniqueName: \"kubernetes.io/projected/451500d6-673a-42ac-84b5-75d3b9d46998-kube-api-access-cw99z\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx\" (UID: \"451500d6-673a-42ac-84b5-75d3b9d46998\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.546988 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/451500d6-673a-42ac-84b5-75d3b9d46998-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx\" (UID: \"451500d6-673a-42ac-84b5-75d3b9d46998\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.647720 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/451500d6-673a-42ac-84b5-75d3b9d46998-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx\" (UID: \"451500d6-673a-42ac-84b5-75d3b9d46998\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.647813 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/451500d6-673a-42ac-84b5-75d3b9d46998-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx\" (UID: \"451500d6-673a-42ac-84b5-75d3b9d46998\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.647877 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw99z\" (UniqueName: \"kubernetes.io/projected/451500d6-673a-42ac-84b5-75d3b9d46998-kube-api-access-cw99z\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx\" (UID: \"451500d6-673a-42ac-84b5-75d3b9d46998\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.648665 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/451500d6-673a-42ac-84b5-75d3b9d46998-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx\" (UID: \"451500d6-673a-42ac-84b5-75d3b9d46998\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.649511 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/451500d6-673a-42ac-84b5-75d3b9d46998-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx\" (UID: \"451500d6-673a-42ac-84b5-75d3b9d46998\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.676764 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw99z\" (UniqueName: \"kubernetes.io/projected/451500d6-673a-42ac-84b5-75d3b9d46998-kube-api-access-cw99z\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx\" (UID: \"451500d6-673a-42ac-84b5-75d3b9d46998\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" Jan 29 15:41:56 crc kubenswrapper[5008]: I0129 15:41:56.796620 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" Jan 29 15:41:57 crc kubenswrapper[5008]: I0129 15:41:57.025244 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx"] Jan 29 15:41:57 crc kubenswrapper[5008]: W0129 15:41:57.030978 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod451500d6_673a_42ac_84b5_75d3b9d46998.slice/crio-97d237cbc2e6be8a6f4fd2df6d72e70d9fb059732c83f0201153ef8959ae43d2 WatchSource:0}: Error finding container 97d237cbc2e6be8a6f4fd2df6d72e70d9fb059732c83f0201153ef8959ae43d2: Status 404 returned error can't find the container with id 97d237cbc2e6be8a6f4fd2df6d72e70d9fb059732c83f0201153ef8959ae43d2 Jan 29 15:41:57 crc kubenswrapper[5008]: I0129 15:41:57.848034 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-g2rk6" podUID="3f7de4a5-3819-41c0-9e2e-766dcff408bb" containerName="console" containerID="cri-o://df5ae52d7003ab128c12d9fe4ed77a8f1ef6ec06ad705d9f914ff4635fb217e5" gracePeriod=15 Jan 29 15:41:57 crc kubenswrapper[5008]: I0129 15:41:57.887625 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" event={"ID":"451500d6-673a-42ac-84b5-75d3b9d46998","Type":"ContainerDied","Data":"c7d76c04dc424b63c19953db970e3e26a1b3ddd5f0a8ed063c0b7d3a54534b5f"} Jan 29 15:41:57 crc kubenswrapper[5008]: I0129 15:41:57.887457 5008 generic.go:334] "Generic (PLEG): container finished" podID="451500d6-673a-42ac-84b5-75d3b9d46998" containerID="c7d76c04dc424b63c19953db970e3e26a1b3ddd5f0a8ed063c0b7d3a54534b5f" exitCode=0 Jan 29 15:41:57 crc kubenswrapper[5008]: I0129 15:41:57.887766 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" event={"ID":"451500d6-673a-42ac-84b5-75d3b9d46998","Type":"ContainerStarted","Data":"97d237cbc2e6be8a6f4fd2df6d72e70d9fb059732c83f0201153ef8959ae43d2"} Jan 29 15:41:58 crc 
kubenswrapper[5008]: I0129 15:41:58.240982 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-g2rk6_3f7de4a5-3819-41c0-9e2e-766dcff408bb/console/0.log" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.241356 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.274743 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-oauth-config\") pod \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.274904 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-config\") pod \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.274936 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-serving-cert\") pod \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.274964 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pz26\" (UniqueName: \"kubernetes.io/projected/3f7de4a5-3819-41c0-9e2e-766dcff408bb-kube-api-access-4pz26\") pod \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.275963 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "3f7de4a5-3819-41c0-9e2e-766dcff408bb" (UID: "3f7de4a5-3819-41c0-9e2e-766dcff408bb"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.276643 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-config" (OuterVolumeSpecName: "console-config") pod "3f7de4a5-3819-41c0-9e2e-766dcff408bb" (UID: "3f7de4a5-3819-41c0-9e2e-766dcff408bb"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.276939 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-trusted-ca-bundle\") pod \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.277411 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-oauth-serving-cert\") pod \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.277470 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-service-ca\") pod \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\" (UID: \"3f7de4a5-3819-41c0-9e2e-766dcff408bb\") " Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.277853 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "3f7de4a5-3819-41c0-9e2e-766dcff408bb" (UID: "3f7de4a5-3819-41c0-9e2e-766dcff408bb"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.278041 5008 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.278070 5008 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.278083 5008 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.278396 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-service-ca" (OuterVolumeSpecName: "service-ca") pod "3f7de4a5-3819-41c0-9e2e-766dcff408bb" (UID: "3f7de4a5-3819-41c0-9e2e-766dcff408bb"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.282305 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "3f7de4a5-3819-41c0-9e2e-766dcff408bb" (UID: "3f7de4a5-3819-41c0-9e2e-766dcff408bb"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.282575 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "3f7de4a5-3819-41c0-9e2e-766dcff408bb" (UID: "3f7de4a5-3819-41c0-9e2e-766dcff408bb"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.284183 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f7de4a5-3819-41c0-9e2e-766dcff408bb-kube-api-access-4pz26" (OuterVolumeSpecName: "kube-api-access-4pz26") pod "3f7de4a5-3819-41c0-9e2e-766dcff408bb" (UID: "3f7de4a5-3819-41c0-9e2e-766dcff408bb"). InnerVolumeSpecName "kube-api-access-4pz26". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.379536 5008 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.379592 5008 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f7de4a5-3819-41c0-9e2e-766dcff408bb-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.379612 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pz26\" (UniqueName: \"kubernetes.io/projected/3f7de4a5-3819-41c0-9e2e-766dcff408bb-kube-api-access-4pz26\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.379634 5008 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3f7de4a5-3819-41c0-9e2e-766dcff408bb-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.897094 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-g2rk6_3f7de4a5-3819-41c0-9e2e-766dcff408bb/console/0.log" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.897186 5008 generic.go:334] "Generic (PLEG): container finished" podID="3f7de4a5-3819-41c0-9e2e-766dcff408bb" containerID="df5ae52d7003ab128c12d9fe4ed77a8f1ef6ec06ad705d9f914ff4635fb217e5" exitCode=2 Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.897237 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-g2rk6" event={"ID":"3f7de4a5-3819-41c0-9e2e-766dcff408bb","Type":"ContainerDied","Data":"df5ae52d7003ab128c12d9fe4ed77a8f1ef6ec06ad705d9f914ff4635fb217e5"} Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.897298 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-g2rk6" event={"ID":"3f7de4a5-3819-41c0-9e2e-766dcff408bb","Type":"ContainerDied","Data":"0d50d0b75f6e0f8a4026a940843934088791e81f1a0bc633f602d35cd43598eb"} Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.897319 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-g2rk6" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.897332 5008 scope.go:117] "RemoveContainer" containerID="df5ae52d7003ab128c12d9fe4ed77a8f1ef6ec06ad705d9f914ff4635fb217e5" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.926953 5008 scope.go:117] "RemoveContainer" containerID="df5ae52d7003ab128c12d9fe4ed77a8f1ef6ec06ad705d9f914ff4635fb217e5" Jan 29 15:41:58 crc kubenswrapper[5008]: E0129 15:41:58.927622 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df5ae52d7003ab128c12d9fe4ed77a8f1ef6ec06ad705d9f914ff4635fb217e5\": container with ID starting with df5ae52d7003ab128c12d9fe4ed77a8f1ef6ec06ad705d9f914ff4635fb217e5 not found: ID does not exist" containerID="df5ae52d7003ab128c12d9fe4ed77a8f1ef6ec06ad705d9f914ff4635fb217e5" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.927684 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df5ae52d7003ab128c12d9fe4ed77a8f1ef6ec06ad705d9f914ff4635fb217e5"} err="failed to get container status \"df5ae52d7003ab128c12d9fe4ed77a8f1ef6ec06ad705d9f914ff4635fb217e5\": rpc error: code = NotFound desc = could not find container \"df5ae52d7003ab128c12d9fe4ed77a8f1ef6ec06ad705d9f914ff4635fb217e5\": container with ID starting with df5ae52d7003ab128c12d9fe4ed77a8f1ef6ec06ad705d9f914ff4635fb217e5 not found: ID does not exist" Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.947914 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-g2rk6"] Jan 29 15:41:58 crc kubenswrapper[5008]: I0129 15:41:58.954557 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-g2rk6"] Jan 29 15:41:59 crc kubenswrapper[5008]: I0129 15:41:59.332361 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f7de4a5-3819-41c0-9e2e-766dcff408bb" path="/var/lib/kubelet/pods/3f7de4a5-3819-41c0-9e2e-766dcff408bb/volumes" Jan 29 15:42:00 crc kubenswrapper[5008]: I0129 15:42:00.915398 5008 generic.go:334] "Generic (PLEG): container finished" podID="451500d6-673a-42ac-84b5-75d3b9d46998" containerID="9b68156a7941132fb9e50897803f7be82cd15c7a699bcc0fb1a329ae9ae48b4f" exitCode=0 Jan 29 15:42:00 crc kubenswrapper[5008]: I0129 15:42:00.915466 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" event={"ID":"451500d6-673a-42ac-84b5-75d3b9d46998","Type":"ContainerDied","Data":"9b68156a7941132fb9e50897803f7be82cd15c7a699bcc0fb1a329ae9ae48b4f"} Jan 29 15:42:01 crc kubenswrapper[5008]: I0129 15:42:01.926689 5008 generic.go:334] "Generic (PLEG): container finished" podID="451500d6-673a-42ac-84b5-75d3b9d46998" containerID="877b0ce4c3dc2404fe743931e1c40d52b8b08e0ded6fb97f08fca18d660def06" exitCode=0 Jan 29 15:42:01 crc kubenswrapper[5008]: I0129 15:42:01.926768 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" event={"ID":"451500d6-673a-42ac-84b5-75d3b9d46998","Type":"ContainerDied","Data":"877b0ce4c3dc2404fe743931e1c40d52b8b08e0ded6fb97f08fca18d660def06"} Jan 29 15:42:03 crc kubenswrapper[5008]: I0129 15:42:03.254150 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" Jan 29 15:42:03 crc kubenswrapper[5008]: I0129 15:42:03.445951 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cw99z\" (UniqueName: \"kubernetes.io/projected/451500d6-673a-42ac-84b5-75d3b9d46998-kube-api-access-cw99z\") pod \"451500d6-673a-42ac-84b5-75d3b9d46998\" (UID: \"451500d6-673a-42ac-84b5-75d3b9d46998\") " Jan 29 15:42:03 crc kubenswrapper[5008]: I0129 15:42:03.446145 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/451500d6-673a-42ac-84b5-75d3b9d46998-bundle\") pod \"451500d6-673a-42ac-84b5-75d3b9d46998\" (UID: \"451500d6-673a-42ac-84b5-75d3b9d46998\") " Jan 29 15:42:03 crc kubenswrapper[5008]: I0129 15:42:03.446229 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/451500d6-673a-42ac-84b5-75d3b9d46998-util\") pod \"451500d6-673a-42ac-84b5-75d3b9d46998\" (UID: \"451500d6-673a-42ac-84b5-75d3b9d46998\") " Jan 29 15:42:03 crc kubenswrapper[5008]: I0129 15:42:03.447136 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/451500d6-673a-42ac-84b5-75d3b9d46998-bundle" (OuterVolumeSpecName: "bundle") pod "451500d6-673a-42ac-84b5-75d3b9d46998" (UID: "451500d6-673a-42ac-84b5-75d3b9d46998"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:42:03 crc kubenswrapper[5008]: I0129 15:42:03.457031 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/451500d6-673a-42ac-84b5-75d3b9d46998-kube-api-access-cw99z" (OuterVolumeSpecName: "kube-api-access-cw99z") pod "451500d6-673a-42ac-84b5-75d3b9d46998" (UID: "451500d6-673a-42ac-84b5-75d3b9d46998"). InnerVolumeSpecName "kube-api-access-cw99z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:42:03 crc kubenswrapper[5008]: I0129 15:42:03.470458 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/451500d6-673a-42ac-84b5-75d3b9d46998-util" (OuterVolumeSpecName: "util") pod "451500d6-673a-42ac-84b5-75d3b9d46998" (UID: "451500d6-673a-42ac-84b5-75d3b9d46998"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:42:03 crc kubenswrapper[5008]: I0129 15:42:03.548249 5008 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/451500d6-673a-42ac-84b5-75d3b9d46998-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:42:03 crc kubenswrapper[5008]: I0129 15:42:03.548344 5008 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/451500d6-673a-42ac-84b5-75d3b9d46998-util\") on node \"crc\" DevicePath \"\"" Jan 29 15:42:03 crc kubenswrapper[5008]: I0129 15:42:03.548365 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cw99z\" (UniqueName: \"kubernetes.io/projected/451500d6-673a-42ac-84b5-75d3b9d46998-kube-api-access-cw99z\") on node \"crc\" DevicePath \"\"" Jan 29 15:42:03 crc kubenswrapper[5008]: I0129 15:42:03.940996 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" event={"ID":"451500d6-673a-42ac-84b5-75d3b9d46998","Type":"ContainerDied","Data":"97d237cbc2e6be8a6f4fd2df6d72e70d9fb059732c83f0201153ef8959ae43d2"} Jan 29 15:42:03 crc kubenswrapper[5008]: I0129 15:42:03.941033 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97d237cbc2e6be8a6f4fd2df6d72e70d9fb059732c83f0201153ef8959ae43d2" Jan 29 15:42:03 crc kubenswrapper[5008]: I0129 15:42:03.941094 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.677869 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-8644cb7465-xww64"] Jan 29 15:42:11 crc kubenswrapper[5008]: E0129 15:42:11.678304 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="451500d6-673a-42ac-84b5-75d3b9d46998" containerName="extract" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.678315 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="451500d6-673a-42ac-84b5-75d3b9d46998" containerName="extract" Jan 29 15:42:11 crc kubenswrapper[5008]: E0129 15:42:11.678328 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f7de4a5-3819-41c0-9e2e-766dcff408bb" containerName="console" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.678334 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f7de4a5-3819-41c0-9e2e-766dcff408bb" containerName="console" Jan 29 15:42:11 crc kubenswrapper[5008]: E0129 15:42:11.678349 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="451500d6-673a-42ac-84b5-75d3b9d46998" containerName="util" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.678356 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="451500d6-673a-42ac-84b5-75d3b9d46998" containerName="util" Jan 29 15:42:11 crc kubenswrapper[5008]: E0129 15:42:11.678365 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="451500d6-673a-42ac-84b5-75d3b9d46998" containerName="pull" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.678371 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="451500d6-673a-42ac-84b5-75d3b9d46998" containerName="pull" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.678458 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f7de4a5-3819-41c0-9e2e-766dcff408bb" containerName="console" Jan 
29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.678466 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="451500d6-673a-42ac-84b5-75d3b9d46998" containerName="extract" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.678836 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-8644cb7465-xww64" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.681367 5008 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.681571 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.681658 5008 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.682286 5008 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-vzd7g" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.682976 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.708462 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-8644cb7465-xww64"] Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.744733 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-544bf\" (UniqueName: \"kubernetes.io/projected/65797f8d-98da-4cbc-a7df-cd6d00fda635-kube-api-access-544bf\") pod \"metallb-operator-controller-manager-8644cb7465-xww64\" (UID: \"65797f8d-98da-4cbc-a7df-cd6d00fda635\") " pod="metallb-system/metallb-operator-controller-manager-8644cb7465-xww64" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.744913 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/65797f8d-98da-4cbc-a7df-cd6d00fda635-webhook-cert\") pod \"metallb-operator-controller-manager-8644cb7465-xww64\" (UID: \"65797f8d-98da-4cbc-a7df-cd6d00fda635\") " pod="metallb-system/metallb-operator-controller-manager-8644cb7465-xww64" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.744957 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/65797f8d-98da-4cbc-a7df-cd6d00fda635-apiservice-cert\") pod \"metallb-operator-controller-manager-8644cb7465-xww64\" (UID: \"65797f8d-98da-4cbc-a7df-cd6d00fda635\") " pod="metallb-system/metallb-operator-controller-manager-8644cb7465-xww64" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.845713 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-544bf\" (UniqueName: \"kubernetes.io/projected/65797f8d-98da-4cbc-a7df-cd6d00fda635-kube-api-access-544bf\") pod \"metallb-operator-controller-manager-8644cb7465-xww64\" (UID: \"65797f8d-98da-4cbc-a7df-cd6d00fda635\") " pod="metallb-system/metallb-operator-controller-manager-8644cb7465-xww64" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.845862 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/65797f8d-98da-4cbc-a7df-cd6d00fda635-webhook-cert\") pod \"metallb-operator-controller-manager-8644cb7465-xww64\" (UID: \"65797f8d-98da-4cbc-a7df-cd6d00fda635\") " pod="metallb-system/metallb-operator-controller-manager-8644cb7465-xww64" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.845905 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/65797f8d-98da-4cbc-a7df-cd6d00fda635-apiservice-cert\") pod \"metallb-operator-controller-manager-8644cb7465-xww64\" (UID: \"65797f8d-98da-4cbc-a7df-cd6d00fda635\") " pod="metallb-system/metallb-operator-controller-manager-8644cb7465-xww64" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.863017 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/65797f8d-98da-4cbc-a7df-cd6d00fda635-apiservice-cert\") pod \"metallb-operator-controller-manager-8644cb7465-xww64\" (UID: \"65797f8d-98da-4cbc-a7df-cd6d00fda635\") " pod="metallb-system/metallb-operator-controller-manager-8644cb7465-xww64" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.865288 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/65797f8d-98da-4cbc-a7df-cd6d00fda635-webhook-cert\") pod \"metallb-operator-controller-manager-8644cb7465-xww64\" (UID: \"65797f8d-98da-4cbc-a7df-cd6d00fda635\") " pod="metallb-system/metallb-operator-controller-manager-8644cb7465-xww64" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.867920 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-544bf\" (UniqueName: \"kubernetes.io/projected/65797f8d-98da-4cbc-a7df-cd6d00fda635-kube-api-access-544bf\") pod \"metallb-operator-controller-manager-8644cb7465-xww64\" (UID: \"65797f8d-98da-4cbc-a7df-cd6d00fda635\") " pod="metallb-system/metallb-operator-controller-manager-8644cb7465-xww64" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.933078 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9"] Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.933707 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.936361 5008 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.936381 5008 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.936972 5008 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-tb9k6" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.946865 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dv4q\" (UniqueName: \"kubernetes.io/projected/42235713-405f-4dc1-9e60-3b1615ec49a2-kube-api-access-7dv4q\") pod \"metallb-operator-webhook-server-6b97546cb-r5lk9\" (UID: \"42235713-405f-4dc1-9e60-3b1615ec49a2\") " pod="metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.946903 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42235713-405f-4dc1-9e60-3b1615ec49a2-apiservice-cert\") pod \"metallb-operator-webhook-server-6b97546cb-r5lk9\" (UID: \"42235713-405f-4dc1-9e60-3b1615ec49a2\") " pod="metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.946935 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/42235713-405f-4dc1-9e60-3b1615ec49a2-webhook-cert\") pod \"metallb-operator-webhook-server-6b97546cb-r5lk9\" (UID: \"42235713-405f-4dc1-9e60-3b1615ec49a2\") " pod="metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9" Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.953588 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9"] Jan 29 15:42:11 crc kubenswrapper[5008]: I0129 15:42:11.992288 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-8644cb7465-xww64" Jan 29 15:42:12 crc kubenswrapper[5008]: I0129 15:42:12.048086 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dv4q\" (UniqueName: \"kubernetes.io/projected/42235713-405f-4dc1-9e60-3b1615ec49a2-kube-api-access-7dv4q\") pod \"metallb-operator-webhook-server-6b97546cb-r5lk9\" (UID: \"42235713-405f-4dc1-9e60-3b1615ec49a2\") " pod="metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9" Jan 29 15:42:12 crc kubenswrapper[5008]: I0129 15:42:12.048130 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42235713-405f-4dc1-9e60-3b1615ec49a2-apiservice-cert\") pod \"metallb-operator-webhook-server-6b97546cb-r5lk9\" (UID: \"42235713-405f-4dc1-9e60-3b1615ec49a2\") " pod="metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9" Jan 29 15:42:12 crc kubenswrapper[5008]: I0129 15:42:12.048162 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/42235713-405f-4dc1-9e60-3b1615ec49a2-webhook-cert\") pod \"metallb-operator-webhook-server-6b97546cb-r5lk9\" (UID: \"42235713-405f-4dc1-9e60-3b1615ec49a2\") " pod="metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9" Jan 29 15:42:12 crc kubenswrapper[5008]: I0129 15:42:12.052624 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/42235713-405f-4dc1-9e60-3b1615ec49a2-webhook-cert\") pod \"metallb-operator-webhook-server-6b97546cb-r5lk9\" (UID: \"42235713-405f-4dc1-9e60-3b1615ec49a2\") " pod="metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9" Jan 29 15:42:12 crc kubenswrapper[5008]: I0129 15:42:12.052628 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42235713-405f-4dc1-9e60-3b1615ec49a2-apiservice-cert\") pod \"metallb-operator-webhook-server-6b97546cb-r5lk9\" (UID: \"42235713-405f-4dc1-9e60-3b1615ec49a2\") " pod="metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9" Jan 29 15:42:12 crc kubenswrapper[5008]: I0129 15:42:12.070614 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dv4q\" (UniqueName: \"kubernetes.io/projected/42235713-405f-4dc1-9e60-3b1615ec49a2-kube-api-access-7dv4q\") pod \"metallb-operator-webhook-server-6b97546cb-r5lk9\" (UID: \"42235713-405f-4dc1-9e60-3b1615ec49a2\") " pod="metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9" Jan 29 15:42:12 crc kubenswrapper[5008]: I0129 15:42:12.250520 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9" Jan 29 15:42:12 crc kubenswrapper[5008]: I0129 15:42:12.446610 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-8644cb7465-xww64"] Jan 29 15:42:12 crc kubenswrapper[5008]: I0129 15:42:12.687034 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9"] Jan 29 15:42:12 crc kubenswrapper[5008]: W0129 15:42:12.688667 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42235713_405f_4dc1_9e60_3b1615ec49a2.slice/crio-00778d443446d4228b3684740cab94c02807024a47b2307dbbd66897b8f2c40b WatchSource:0}: Error finding container 00778d443446d4228b3684740cab94c02807024a47b2307dbbd66897b8f2c40b: Status 404 returned error can't find the container with id 00778d443446d4228b3684740cab94c02807024a47b2307dbbd66897b8f2c40b Jan 29 15:42:12 crc kubenswrapper[5008]: I0129 15:42:12.988323 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9" event={"ID":"42235713-405f-4dc1-9e60-3b1615ec49a2","Type":"ContainerStarted","Data":"00778d443446d4228b3684740cab94c02807024a47b2307dbbd66897b8f2c40b"} Jan 29 15:42:12 crc kubenswrapper[5008]: I0129 15:42:12.989303 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-8644cb7465-xww64" event={"ID":"65797f8d-98da-4cbc-a7df-cd6d00fda635","Type":"ContainerStarted","Data":"992a9109d9f61c2dffc8a568ce4f4d2ef6f3e1496f092aea367052b4a5d0bc40"} Jan 29 15:42:18 crc kubenswrapper[5008]: I0129 15:42:18.021433 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-8644cb7465-xww64" event={"ID":"65797f8d-98da-4cbc-a7df-cd6d00fda635","Type":"ContainerStarted","Data":"dd6ecf4d9c9d10631c17c9542fcc50fc6daee52b69dd97e0eb43fdf0fecf228d"} Jan 29 15:42:18 crc kubenswrapper[5008]: I0129 15:42:18.022188 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-8644cb7465-xww64" Jan 29 15:42:18 crc kubenswrapper[5008]: I0129 15:42:18.034033 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9" event={"ID":"42235713-405f-4dc1-9e60-3b1615ec49a2","Type":"ContainerStarted","Data":"4bd0ea5c3666d117001376d771c4539d9a45028b6d8c8333357dda28aeb1d5b9"} Jan 29 15:42:18 crc kubenswrapper[5008]: I0129 15:42:18.034410 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9" Jan 29 15:42:18 crc kubenswrapper[5008]: I0129 15:42:18.072478 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-8644cb7465-xww64" podStartSLOduration=2.439297115 podStartE2EDuration="7.072457309s" podCreationTimestamp="2026-01-29 15:42:11 +0000 UTC" firstStartedPulling="2026-01-29 15:42:12.453599401 +0000 UTC m=+876.126453638" lastFinishedPulling="2026-01-29 15:42:17.086759605 +0000 UTC m=+880.759613832" observedRunningTime="2026-01-29 15:42:18.066011782 +0000 UTC m=+881.738866019" watchObservedRunningTime="2026-01-29 15:42:18.072457309 +0000 UTC m=+881.745311566" Jan 29 15:42:18 crc kubenswrapper[5008]: I0129 15:42:18.091825 5008 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9" podStartSLOduration=2.670606008 podStartE2EDuration="7.091805409s" podCreationTimestamp="2026-01-29 15:42:11 +0000 UTC" firstStartedPulling="2026-01-29 15:42:12.691291258 +0000 UTC m=+876.364145495" lastFinishedPulling="2026-01-29 15:42:17.112490659 +0000 UTC m=+880.785344896" observedRunningTime="2026-01-29 15:42:18.087348241 +0000 UTC m=+881.760202508" watchObservedRunningTime="2026-01-29 15:42:18.091805409 +0000 UTC m=+881.764659676" Jan 29 15:42:32 crc kubenswrapper[5008]: I0129 15:42:32.266479 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6b97546cb-r5lk9" Jan 29 15:42:51 crc kubenswrapper[5008]: I0129 15:42:51.995663 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-8644cb7465-xww64" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.699419 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-95tm6"] Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.701551 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.703576 5008 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.703596 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.704043 5008 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-z8dgm" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.706591 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-4l5h6"] Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.707270 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4l5h6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.708623 5008 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.726612 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-4l5h6"] Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.784366 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-dmtw7"] Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.785178 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-dmtw7" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.787717 5008 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.787917 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.787766 5008 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.787840 5008 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-bf7md" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.813127 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-bzslg"] Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.813914 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-bzslg" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.816115 5008 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.818340 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/17fc1fa7-5758-4768-a6f5-5b63b63d0948-frr-sockets\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.818366 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/17fc1fa7-5758-4768-a6f5-5b63b63d0948-metrics\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.818392 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m887\" (UniqueName: \"kubernetes.io/projected/17fc1fa7-5758-4768-a6f5-5b63b63d0948-kube-api-access-2m887\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.818439 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/17fc1fa7-5758-4768-a6f5-5b63b63d0948-reloader\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.818455 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwlm6\" (UniqueName: \"kubernetes.io/projected/8927915f-8333-415c-82e1-47d948a6e8ad-kube-api-access-lwlm6\") pod \"speaker-dmtw7\" (UID: \"8927915f-8333-415c-82e1-47d948a6e8ad\") " pod="metallb-system/speaker-dmtw7" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.818470 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8927915f-8333-415c-82e1-47d948a6e8ad-memberlist\") pod \"speaker-dmtw7\" (UID: \"8927915f-8333-415c-82e1-47d948a6e8ad\") " pod="metallb-system/speaker-dmtw7" Jan 
29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.818483 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j8d5\" (UniqueName: \"kubernetes.io/projected/88b3b62b-8ee9-4541-a109-c52f195f55c2-kube-api-access-2j8d5\") pod \"controller-6968d8fdc4-bzslg\" (UID: \"88b3b62b-8ee9-4541-a109-c52f195f55c2\") " pod="metallb-system/controller-6968d8fdc4-bzslg" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.818514 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8927915f-8333-415c-82e1-47d948a6e8ad-metrics-certs\") pod \"speaker-dmtw7\" (UID: \"8927915f-8333-415c-82e1-47d948a6e8ad\") " pod="metallb-system/speaker-dmtw7" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.818531 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fc07e8e0-7de8-4d7a-96f9-8ccdd7180f07-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-4l5h6\" (UID: \"fc07e8e0-7de8-4d7a-96f9-8ccdd7180f07\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4l5h6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.818547 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl7wn\" (UniqueName: \"kubernetes.io/projected/fc07e8e0-7de8-4d7a-96f9-8ccdd7180f07-kube-api-access-rl7wn\") pod \"frr-k8s-webhook-server-7df86c4f6c-4l5h6\" (UID: \"fc07e8e0-7de8-4d7a-96f9-8ccdd7180f07\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4l5h6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.818564 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/88b3b62b-8ee9-4541-a109-c52f195f55c2-cert\") pod \"controller-6968d8fdc4-bzslg\" (UID: \"88b3b62b-8ee9-4541-a109-c52f195f55c2\") " pod="metallb-system/controller-6968d8fdc4-bzslg" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.818577 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/17fc1fa7-5758-4768-a6f5-5b63b63d0948-frr-conf\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.818598 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/17fc1fa7-5758-4768-a6f5-5b63b63d0948-frr-startup\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.818621 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88b3b62b-8ee9-4541-a109-c52f195f55c2-metrics-certs\") pod \"controller-6968d8fdc4-bzslg\" (UID: \"88b3b62b-8ee9-4541-a109-c52f195f55c2\") " pod="metallb-system/controller-6968d8fdc4-bzslg" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.818635 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8927915f-8333-415c-82e1-47d948a6e8ad-metallb-excludel2\") pod \"speaker-dmtw7\" (UID: \"8927915f-8333-415c-82e1-47d948a6e8ad\") 
" pod="metallb-system/speaker-dmtw7" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.818658 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/17fc1fa7-5758-4768-a6f5-5b63b63d0948-metrics-certs\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.824004 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-bzslg"] Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.920199 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/17fc1fa7-5758-4768-a6f5-5b63b63d0948-reloader\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.920554 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwlm6\" (UniqueName: \"kubernetes.io/projected/8927915f-8333-415c-82e1-47d948a6e8ad-kube-api-access-lwlm6\") pod \"speaker-dmtw7\" (UID: \"8927915f-8333-415c-82e1-47d948a6e8ad\") " pod="metallb-system/speaker-dmtw7" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.920656 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8927915f-8333-415c-82e1-47d948a6e8ad-memberlist\") pod \"speaker-dmtw7\" (UID: \"8927915f-8333-415c-82e1-47d948a6e8ad\") " pod="metallb-system/speaker-dmtw7" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.920761 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j8d5\" (UniqueName: \"kubernetes.io/projected/88b3b62b-8ee9-4541-a109-c52f195f55c2-kube-api-access-2j8d5\") pod \"controller-6968d8fdc4-bzslg\" (UID: \"88b3b62b-8ee9-4541-a109-c52f195f55c2\") " pod="metallb-system/controller-6968d8fdc4-bzslg" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.920911 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8927915f-8333-415c-82e1-47d948a6e8ad-metrics-certs\") pod \"speaker-dmtw7\" (UID: \"8927915f-8333-415c-82e1-47d948a6e8ad\") " pod="metallb-system/speaker-dmtw7" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.921021 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl7wn\" (UniqueName: \"kubernetes.io/projected/fc07e8e0-7de8-4d7a-96f9-8ccdd7180f07-kube-api-access-rl7wn\") pod \"frr-k8s-webhook-server-7df86c4f6c-4l5h6\" (UID: \"fc07e8e0-7de8-4d7a-96f9-8ccdd7180f07\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4l5h6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.921111 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fc07e8e0-7de8-4d7a-96f9-8ccdd7180f07-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-4l5h6\" (UID: \"fc07e8e0-7de8-4d7a-96f9-8ccdd7180f07\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4l5h6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.921201 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/88b3b62b-8ee9-4541-a109-c52f195f55c2-cert\") pod \"controller-6968d8fdc4-bzslg\" (UID: 
\"88b3b62b-8ee9-4541-a109-c52f195f55c2\") " pod="metallb-system/controller-6968d8fdc4-bzslg" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.921304 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/17fc1fa7-5758-4768-a6f5-5b63b63d0948-frr-conf\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.921402 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/17fc1fa7-5758-4768-a6f5-5b63b63d0948-frr-startup\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: E0129 15:42:52.920841 5008 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.921571 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88b3b62b-8ee9-4541-a109-c52f195f55c2-metrics-certs\") pod \"controller-6968d8fdc4-bzslg\" (UID: \"88b3b62b-8ee9-4541-a109-c52f195f55c2\") " pod="metallb-system/controller-6968d8fdc4-bzslg" Jan 29 15:42:52 crc kubenswrapper[5008]: E0129 15:42:52.921610 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8927915f-8333-415c-82e1-47d948a6e8ad-memberlist podName:8927915f-8333-415c-82e1-47d948a6e8ad nodeName:}" failed. No retries permitted until 2026-01-29 15:42:53.421569099 +0000 UTC m=+917.094423386 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8927915f-8333-415c-82e1-47d948a6e8ad-memberlist") pod "speaker-dmtw7" (UID: "8927915f-8333-415c-82e1-47d948a6e8ad") : secret "metallb-memberlist" not found Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.921749 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8927915f-8333-415c-82e1-47d948a6e8ad-metallb-excludel2\") pod \"speaker-dmtw7\" (UID: \"8927915f-8333-415c-82e1-47d948a6e8ad\") " pod="metallb-system/speaker-dmtw7" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.921883 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/17fc1fa7-5758-4768-a6f5-5b63b63d0948-metrics-certs\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.922011 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/17fc1fa7-5758-4768-a6f5-5b63b63d0948-frr-sockets\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.921653 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/17fc1fa7-5758-4768-a6f5-5b63b63d0948-reloader\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.921684 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/17fc1fa7-5758-4768-a6f5-5b63b63d0948-frr-conf\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: E0129 15:42:52.921029 5008 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 29 15:42:52 crc kubenswrapper[5008]: E0129 15:42:52.922274 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8927915f-8333-415c-82e1-47d948a6e8ad-metrics-certs podName:8927915f-8333-415c-82e1-47d948a6e8ad nodeName:}" failed. No retries permitted until 2026-01-29 15:42:53.422250815 +0000 UTC m=+917.095105052 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8927915f-8333-415c-82e1-47d948a6e8ad-metrics-certs") pod "speaker-dmtw7" (UID: "8927915f-8333-415c-82e1-47d948a6e8ad") : secret "speaker-certs-secret" not found Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.922347 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/17fc1fa7-5758-4768-a6f5-5b63b63d0948-metrics\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.922413 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/17fc1fa7-5758-4768-a6f5-5b63b63d0948-frr-sockets\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.922540 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m887\" (UniqueName: \"kubernetes.io/projected/17fc1fa7-5758-4768-a6f5-5b63b63d0948-kube-api-access-2m887\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.922591 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/17fc1fa7-5758-4768-a6f5-5b63b63d0948-metrics\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.922931 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8927915f-8333-415c-82e1-47d948a6e8ad-metallb-excludel2\") pod \"speaker-dmtw7\" (UID: \"8927915f-8333-415c-82e1-47d948a6e8ad\") " pod="metallb-system/speaker-dmtw7" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.923064 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/17fc1fa7-5758-4768-a6f5-5b63b63d0948-frr-startup\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.924132 5008 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.927632 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/17fc1fa7-5758-4768-a6f5-5b63b63d0948-metrics-certs\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.934285 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/88b3b62b-8ee9-4541-a109-c52f195f55c2-cert\") pod \"controller-6968d8fdc4-bzslg\" (UID: \"88b3b62b-8ee9-4541-a109-c52f195f55c2\") " pod="metallb-system/controller-6968d8fdc4-bzslg" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.934716 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88b3b62b-8ee9-4541-a109-c52f195f55c2-metrics-certs\") pod \"controller-6968d8fdc4-bzslg\" (UID: \"88b3b62b-8ee9-4541-a109-c52f195f55c2\") " pod="metallb-system/controller-6968d8fdc4-bzslg" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.937245 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fc07e8e0-7de8-4d7a-96f9-8ccdd7180f07-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-4l5h6\" (UID: \"fc07e8e0-7de8-4d7a-96f9-8ccdd7180f07\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4l5h6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.939898 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl7wn\" (UniqueName: \"kubernetes.io/projected/fc07e8e0-7de8-4d7a-96f9-8ccdd7180f07-kube-api-access-rl7wn\") pod \"frr-k8s-webhook-server-7df86c4f6c-4l5h6\" (UID: \"fc07e8e0-7de8-4d7a-96f9-8ccdd7180f07\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4l5h6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.940832 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m887\" (UniqueName: \"kubernetes.io/projected/17fc1fa7-5758-4768-a6f5-5b63b63d0948-kube-api-access-2m887\") pod \"frr-k8s-95tm6\" (UID: \"17fc1fa7-5758-4768-a6f5-5b63b63d0948\") " pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.946384 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwlm6\" (UniqueName: \"kubernetes.io/projected/8927915f-8333-415c-82e1-47d948a6e8ad-kube-api-access-lwlm6\") pod \"speaker-dmtw7\" (UID: \"8927915f-8333-415c-82e1-47d948a6e8ad\") " pod="metallb-system/speaker-dmtw7" Jan 29 15:42:52 crc kubenswrapper[5008]: I0129 15:42:52.956127 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j8d5\" (UniqueName: \"kubernetes.io/projected/88b3b62b-8ee9-4541-a109-c52f195f55c2-kube-api-access-2j8d5\") pod \"controller-6968d8fdc4-bzslg\" (UID: \"88b3b62b-8ee9-4541-a109-c52f195f55c2\") " pod="metallb-system/controller-6968d8fdc4-bzslg" Jan 29 15:42:53 crc kubenswrapper[5008]: I0129 15:42:53.021179 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-95tm6" Jan 29 15:42:53 crc kubenswrapper[5008]: I0129 15:42:53.025481 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4l5h6" Jan 29 15:42:53 crc kubenswrapper[5008]: I0129 15:42:53.127005 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-bzslg" Jan 29 15:42:53 crc kubenswrapper[5008]: I0129 15:42:53.265542 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-95tm6" event={"ID":"17fc1fa7-5758-4768-a6f5-5b63b63d0948","Type":"ContainerStarted","Data":"a9ec40b668ff751f03ae83da8d6adacd7ae1faaf5fa0aa62bef4608b1c387853"} Jan 29 15:42:53 crc kubenswrapper[5008]: E0129 15:42:53.342506 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862" Jan 29 15:42:53 crc kubenswrapper[5008]: E0129 15:42:53.342666 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:cp-frr-files,Image:registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862,Command:[/bin/sh -c cp -rLf /tmp/frr/* /etc/frr/],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:frr-startup,ReadOnly:false,MountPath:/tmp/frr,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:frr-conf,ReadOnly:false,MountPath:/etc/frr,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2m887,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*100,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*101,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod frr-k8s-95tm6_metallb-system(17fc1fa7-5758-4768-a6f5-5b63b63d0948): ErrImagePull: initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:42:53 crc kubenswrapper[5008]: E0129 15:42:53.345878 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ErrImagePull: \"initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="metallb-system/frr-k8s-95tm6" podUID="17fc1fa7-5758-4768-a6f5-5b63b63d0948" Jan 29 15:42:53 crc kubenswrapper[5008]: I0129 15:42:53.350325 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-bzslg"] Jan 29 15:42:53 crc kubenswrapper[5008]: I0129 15:42:53.428018 5008 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8927915f-8333-415c-82e1-47d948a6e8ad-memberlist\") pod \"speaker-dmtw7\" (UID: \"8927915f-8333-415c-82e1-47d948a6e8ad\") " pod="metallb-system/speaker-dmtw7" Jan 29 15:42:53 crc kubenswrapper[5008]: I0129 15:42:53.428305 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8927915f-8333-415c-82e1-47d948a6e8ad-metrics-certs\") pod \"speaker-dmtw7\" (UID: \"8927915f-8333-415c-82e1-47d948a6e8ad\") " pod="metallb-system/speaker-dmtw7" Jan 29 15:42:53 crc kubenswrapper[5008]: E0129 15:42:53.428208 5008 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 15:42:53 crc kubenswrapper[5008]: E0129 15:42:53.428416 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8927915f-8333-415c-82e1-47d948a6e8ad-memberlist podName:8927915f-8333-415c-82e1-47d948a6e8ad nodeName:}" failed. No retries permitted until 2026-01-29 15:42:54.428392915 +0000 UTC m=+918.101247142 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8927915f-8333-415c-82e1-47d948a6e8ad-memberlist") pod "speaker-dmtw7" (UID: "8927915f-8333-415c-82e1-47d948a6e8ad") : secret "metallb-memberlist" not found Jan 29 15:42:53 crc kubenswrapper[5008]: I0129 15:42:53.440683 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8927915f-8333-415c-82e1-47d948a6e8ad-metrics-certs\") pod \"speaker-dmtw7\" (UID: \"8927915f-8333-415c-82e1-47d948a6e8ad\") " pod="metallb-system/speaker-dmtw7" Jan 29 15:42:53 crc kubenswrapper[5008]: I0129 15:42:53.461507 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-4l5h6"] Jan 29 15:42:54 crc kubenswrapper[5008]: I0129 15:42:54.274878 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-bzslg" event={"ID":"88b3b62b-8ee9-4541-a109-c52f195f55c2","Type":"ContainerStarted","Data":"9e8c669b0c62eb6a9f8048e95d9ff90c082f08ad0dad0416ed48e496b71ccd6a"} Jan 29 15:42:54 crc kubenswrapper[5008]: I0129 15:42:54.275172 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-bzslg" Jan 29 15:42:54 crc kubenswrapper[5008]: I0129 15:42:54.275187 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-bzslg" event={"ID":"88b3b62b-8ee9-4541-a109-c52f195f55c2","Type":"ContainerStarted","Data":"f3f3760040ccd43614b9a8bebd2fa4142c416d8f85600f954ab9e93d30f25e99"} Jan 29 15:42:54 crc kubenswrapper[5008]: I0129 15:42:54.275198 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-bzslg" event={"ID":"88b3b62b-8ee9-4541-a109-c52f195f55c2","Type":"ContainerStarted","Data":"6bc269d2a8131b3e266a6aed301e6f1c63c90be6b88abca8e9c021c385871d0f"} Jan 29 15:42:54 crc kubenswrapper[5008]: I0129 15:42:54.276370 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4l5h6" event={"ID":"fc07e8e0-7de8-4d7a-96f9-8ccdd7180f07","Type":"ContainerStarted","Data":"3e79b74cb5e6188efd8f04e4c6248c27fc27d02321f61d7b535be2c547e6371e"} Jan 29 15:42:54 crc kubenswrapper[5008]: E0129 15:42:54.278605 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" 
with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-95tm6" podUID="17fc1fa7-5758-4768-a6f5-5b63b63d0948" Jan 29 15:42:54 crc kubenswrapper[5008]: I0129 15:42:54.293066 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-bzslg" podStartSLOduration=2.293046682 podStartE2EDuration="2.293046682s" podCreationTimestamp="2026-01-29 15:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:42:54.291435623 +0000 UTC m=+917.964289870" watchObservedRunningTime="2026-01-29 15:42:54.293046682 +0000 UTC m=+917.965900929" Jan 29 15:42:54 crc kubenswrapper[5008]: I0129 15:42:54.442049 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8927915f-8333-415c-82e1-47d948a6e8ad-memberlist\") pod \"speaker-dmtw7\" (UID: \"8927915f-8333-415c-82e1-47d948a6e8ad\") " pod="metallb-system/speaker-dmtw7" Jan 29 15:42:54 crc kubenswrapper[5008]: I0129 15:42:54.450249 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8927915f-8333-415c-82e1-47d948a6e8ad-memberlist\") pod \"speaker-dmtw7\" (UID: \"8927915f-8333-415c-82e1-47d948a6e8ad\") " pod="metallb-system/speaker-dmtw7" Jan 29 15:42:54 crc kubenswrapper[5008]: I0129 15:42:54.598027 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-dmtw7" Jan 29 15:42:54 crc kubenswrapper[5008]: W0129 15:42:54.618751 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8927915f_8333_415c_82e1_47d948a6e8ad.slice/crio-604ca4fc3f3b7d4dec84a70104e0463f36a66f636b5d1a782efadc478b5cd653 WatchSource:0}: Error finding container 604ca4fc3f3b7d4dec84a70104e0463f36a66f636b5d1a782efadc478b5cd653: Status 404 returned error can't find the container with id 604ca4fc3f3b7d4dec84a70104e0463f36a66f636b5d1a782efadc478b5cd653 Jan 29 15:42:55 crc kubenswrapper[5008]: I0129 15:42:55.282895 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-dmtw7" event={"ID":"8927915f-8333-415c-82e1-47d948a6e8ad","Type":"ContainerStarted","Data":"1d5fc6e003dc2d03f9c011bb9895ba308880490ea798b98530077ad885d16c7a"} Jan 29 15:42:55 crc kubenswrapper[5008]: I0129 15:42:55.282958 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-dmtw7" event={"ID":"8927915f-8333-415c-82e1-47d948a6e8ad","Type":"ContainerStarted","Data":"a281febc92271ed4741cfa48b172c504b779aedf1063a00d42e14f3869ebae6f"} Jan 29 15:42:55 crc kubenswrapper[5008]: I0129 15:42:55.282971 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-dmtw7" event={"ID":"8927915f-8333-415c-82e1-47d948a6e8ad","Type":"ContainerStarted","Data":"604ca4fc3f3b7d4dec84a70104e0463f36a66f636b5d1a782efadc478b5cd653"} Jan 29 15:42:55 crc kubenswrapper[5008]: I0129 15:42:55.283175 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-dmtw7" Jan 29 15:42:55 crc kubenswrapper[5008]: I0129 15:42:55.301904 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-dmtw7" podStartSLOduration=3.301886238 podStartE2EDuration="3.301886238s" 
podCreationTimestamp="2026-01-29 15:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:42:55.301203521 +0000 UTC m=+918.974057768" watchObservedRunningTime="2026-01-29 15:42:55.301886238 +0000 UTC m=+918.974740475" Jan 29 15:43:00 crc kubenswrapper[5008]: I0129 15:43:00.317406 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4l5h6" event={"ID":"fc07e8e0-7de8-4d7a-96f9-8ccdd7180f07","Type":"ContainerStarted","Data":"1ea2b5bbf48ed8cfd5ae2cdc50e9ac14dd77005e271708a9da6c7fee15f9e08a"} Jan 29 15:43:01 crc kubenswrapper[5008]: I0129 15:43:01.331399 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4l5h6" Jan 29 15:43:01 crc kubenswrapper[5008]: I0129 15:43:01.345552 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4l5h6" podStartSLOduration=2.680539301 podStartE2EDuration="9.345523091s" podCreationTimestamp="2026-01-29 15:42:52 +0000 UTC" firstStartedPulling="2026-01-29 15:42:53.465595527 +0000 UTC m=+917.138449764" lastFinishedPulling="2026-01-29 15:43:00.130579287 +0000 UTC m=+923.803433554" observedRunningTime="2026-01-29 15:43:01.339920155 +0000 UTC m=+925.012774452" watchObservedRunningTime="2026-01-29 15:43:01.345523091 +0000 UTC m=+925.018377338" Jan 29 15:43:03 crc kubenswrapper[5008]: I0129 15:43:03.134989 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-bzslg" Jan 29 15:43:04 crc kubenswrapper[5008]: I0129 15:43:04.602384 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-dmtw7" Jan 29 15:43:06 crc kubenswrapper[5008]: I0129 15:43:06.356655 5008 generic.go:334] "Generic (PLEG): container finished" podID="17fc1fa7-5758-4768-a6f5-5b63b63d0948" containerID="231e5de5485ddfe294e1d3a81c8d79d122a0084f5cf9936952c289b36c0a733d" exitCode=0 Jan 29 15:43:06 crc kubenswrapper[5008]: I0129 15:43:06.356773 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-95tm6" event={"ID":"17fc1fa7-5758-4768-a6f5-5b63b63d0948","Type":"ContainerDied","Data":"231e5de5485ddfe294e1d3a81c8d79d122a0084f5cf9936952c289b36c0a733d"} Jan 29 15:43:07 crc kubenswrapper[5008]: I0129 15:43:07.368193 5008 generic.go:334] "Generic (PLEG): container finished" podID="17fc1fa7-5758-4768-a6f5-5b63b63d0948" containerID="33720197a14eea329aa19313ef67e6121dfb318eeb5363d329c2f32b75b0e16e" exitCode=0 Jan 29 15:43:07 crc kubenswrapper[5008]: I0129 15:43:07.368295 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-95tm6" event={"ID":"17fc1fa7-5758-4768-a6f5-5b63b63d0948","Type":"ContainerDied","Data":"33720197a14eea329aa19313ef67e6121dfb318eeb5363d329c2f32b75b0e16e"} Jan 29 15:43:07 crc kubenswrapper[5008]: I0129 15:43:07.585188 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-bvg8g"] Jan 29 15:43:07 crc kubenswrapper[5008]: I0129 15:43:07.586043 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bvg8g" Jan 29 15:43:07 crc kubenswrapper[5008]: I0129 15:43:07.588679 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 29 15:43:07 crc kubenswrapper[5008]: I0129 15:43:07.588936 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 29 15:43:07 crc kubenswrapper[5008]: I0129 15:43:07.590541 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-fpjp4" Jan 29 15:43:07 crc kubenswrapper[5008]: I0129 15:43:07.608152 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bvg8g"] Jan 29 15:43:07 crc kubenswrapper[5008]: I0129 15:43:07.731163 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mzcl\" (UniqueName: \"kubernetes.io/projected/216a7f22-8b15-4532-a345-2a9da518679f-kube-api-access-6mzcl\") pod \"openstack-operator-index-bvg8g\" (UID: \"216a7f22-8b15-4532-a345-2a9da518679f\") " pod="openstack-operators/openstack-operator-index-bvg8g" Jan 29 15:43:07 crc kubenswrapper[5008]: I0129 15:43:07.833066 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mzcl\" (UniqueName: \"kubernetes.io/projected/216a7f22-8b15-4532-a345-2a9da518679f-kube-api-access-6mzcl\") pod \"openstack-operator-index-bvg8g\" (UID: \"216a7f22-8b15-4532-a345-2a9da518679f\") " pod="openstack-operators/openstack-operator-index-bvg8g" Jan 29 15:43:07 crc kubenswrapper[5008]: I0129 15:43:07.851905 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mzcl\" (UniqueName: \"kubernetes.io/projected/216a7f22-8b15-4532-a345-2a9da518679f-kube-api-access-6mzcl\") pod \"openstack-operator-index-bvg8g\" (UID: \"216a7f22-8b15-4532-a345-2a9da518679f\") " pod="openstack-operators/openstack-operator-index-bvg8g" Jan 29 15:43:07 crc kubenswrapper[5008]: I0129 15:43:07.905078 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bvg8g" Jan 29 15:43:08 crc kubenswrapper[5008]: I0129 15:43:08.130817 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bvg8g"] Jan 29 15:43:08 crc kubenswrapper[5008]: W0129 15:43:08.158920 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod216a7f22_8b15_4532_a345_2a9da518679f.slice/crio-a08f2d41444c3b33931f6fccecf3e8b61a7338461bd1d84edb3bcbd5755fa677 WatchSource:0}: Error finding container a08f2d41444c3b33931f6fccecf3e8b61a7338461bd1d84edb3bcbd5755fa677: Status 404 returned error can't find the container with id a08f2d41444c3b33931f6fccecf3e8b61a7338461bd1d84edb3bcbd5755fa677 Jan 29 15:43:08 crc kubenswrapper[5008]: I0129 15:43:08.377531 5008 generic.go:334] "Generic (PLEG): container finished" podID="17fc1fa7-5758-4768-a6f5-5b63b63d0948" containerID="3170e1b36932438726b302f82a0fbce3307979c9fc880212c37283db916ec3a6" exitCode=0 Jan 29 15:43:08 crc kubenswrapper[5008]: I0129 15:43:08.377565 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-95tm6" event={"ID":"17fc1fa7-5758-4768-a6f5-5b63b63d0948","Type":"ContainerDied","Data":"3170e1b36932438726b302f82a0fbce3307979c9fc880212c37283db916ec3a6"} Jan 29 15:43:08 crc kubenswrapper[5008]: I0129 15:43:08.378354 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bvg8g" event={"ID":"216a7f22-8b15-4532-a345-2a9da518679f","Type":"ContainerStarted","Data":"a08f2d41444c3b33931f6fccecf3e8b61a7338461bd1d84edb3bcbd5755fa677"} Jan 29 15:43:09 crc kubenswrapper[5008]: I0129 15:43:09.386226 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-95tm6" event={"ID":"17fc1fa7-5758-4768-a6f5-5b63b63d0948","Type":"ContainerStarted","Data":"98b69b910e0313ca12d3067fb699555c3f870f775e6b1814a716e32c11f4b945"} Jan 29 15:43:09 crc kubenswrapper[5008]: I0129 15:43:09.386612 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-95tm6" event={"ID":"17fc1fa7-5758-4768-a6f5-5b63b63d0948","Type":"ContainerStarted","Data":"7d461416194708ba876b149d24894e85b55fbf637b48290d0123e59d20667a8e"} Jan 29 15:43:11 crc kubenswrapper[5008]: I0129 15:43:11.344454 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-bvg8g"] Jan 29 15:43:11 crc kubenswrapper[5008]: I0129 15:43:11.408281 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-95tm6" event={"ID":"17fc1fa7-5758-4768-a6f5-5b63b63d0948","Type":"ContainerStarted","Data":"0fb84983516a5b2a40325e8a28b98266055c9d6dbb4f687f6c8c24306ba50dff"} Jan 29 15:43:11 crc kubenswrapper[5008]: I0129 15:43:11.408341 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-95tm6" event={"ID":"17fc1fa7-5758-4768-a6f5-5b63b63d0948","Type":"ContainerStarted","Data":"17a10cc56ce1eb6b41fb2a54a58710f8ba75c5fadd9bdbbb9452e25e3550c7c2"} Jan 29 15:43:11 crc kubenswrapper[5008]: I0129 15:43:11.408364 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-95tm6" event={"ID":"17fc1fa7-5758-4768-a6f5-5b63b63d0948","Type":"ContainerStarted","Data":"d115d747aeb4d5a4087cba5a82125329a21f3ead612d7073181e40ee486b435f"} Jan 29 15:43:11 crc kubenswrapper[5008]: I0129 15:43:11.408383 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-95tm6" 
event={"ID":"17fc1fa7-5758-4768-a6f5-5b63b63d0948","Type":"ContainerStarted","Data":"3716342af50a7804afbe37daeb3c0fb1382c3d99c797eb28bb8f228a26d9fa27"} Jan 29 15:43:11 crc kubenswrapper[5008]: I0129 15:43:11.408410 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-95tm6" Jan 29 15:43:11 crc kubenswrapper[5008]: I0129 15:43:11.410001 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bvg8g" event={"ID":"216a7f22-8b15-4532-a345-2a9da518679f","Type":"ContainerStarted","Data":"e16317683a7a4cfd31f317c71e3b0587b54f896e9512e794e421ff3d8119247a"} Jan 29 15:43:11 crc kubenswrapper[5008]: I0129 15:43:11.455210 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-95tm6" podStartSLOduration=-9223372017.399584 podStartE2EDuration="19.455192762s" podCreationTimestamp="2026-01-29 15:42:52 +0000 UTC" firstStartedPulling="2026-01-29 15:42:53.205477247 +0000 UTC m=+916.878331484" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:43:11.438124448 +0000 UTC m=+935.110978725" watchObservedRunningTime="2026-01-29 15:43:11.455192762 +0000 UTC m=+935.128046999" Jan 29 15:43:11 crc kubenswrapper[5008]: I0129 15:43:11.951177 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-bvg8g" podStartSLOduration=2.687736361 podStartE2EDuration="4.951151114s" podCreationTimestamp="2026-01-29 15:43:07 +0000 UTC" firstStartedPulling="2026-01-29 15:43:08.175576065 +0000 UTC m=+931.848430302" lastFinishedPulling="2026-01-29 15:43:10.438990818 +0000 UTC m=+934.111845055" observedRunningTime="2026-01-29 15:43:11.454324191 +0000 UTC m=+935.127178438" watchObservedRunningTime="2026-01-29 15:43:11.951151114 +0000 UTC m=+935.624005391" Jan 29 15:43:11 crc kubenswrapper[5008]: I0129 15:43:11.955441 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-lv8km"] Jan 29 15:43:11 crc kubenswrapper[5008]: I0129 15:43:11.956888 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-lv8km" Jan 29 15:43:11 crc kubenswrapper[5008]: I0129 15:43:11.961297 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-lv8km"] Jan 29 15:43:12 crc kubenswrapper[5008]: I0129 15:43:12.107388 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsbwg\" (UniqueName: \"kubernetes.io/projected/cdce8b7e-15b6-41ae-89f3-fd69472b9800-kube-api-access-bsbwg\") pod \"openstack-operator-index-lv8km\" (UID: \"cdce8b7e-15b6-41ae-89f3-fd69472b9800\") " pod="openstack-operators/openstack-operator-index-lv8km" Jan 29 15:43:12 crc kubenswrapper[5008]: I0129 15:43:12.208294 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsbwg\" (UniqueName: \"kubernetes.io/projected/cdce8b7e-15b6-41ae-89f3-fd69472b9800-kube-api-access-bsbwg\") pod \"openstack-operator-index-lv8km\" (UID: \"cdce8b7e-15b6-41ae-89f3-fd69472b9800\") " pod="openstack-operators/openstack-operator-index-lv8km" Jan 29 15:43:12 crc kubenswrapper[5008]: I0129 15:43:12.229842 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsbwg\" (UniqueName: \"kubernetes.io/projected/cdce8b7e-15b6-41ae-89f3-fd69472b9800-kube-api-access-bsbwg\") pod \"openstack-operator-index-lv8km\" (UID: \"cdce8b7e-15b6-41ae-89f3-fd69472b9800\") " pod="openstack-operators/openstack-operator-index-lv8km" Jan 29 15:43:12 crc kubenswrapper[5008]: I0129 15:43:12.306935 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-lv8km" Jan 29 15:43:12 crc kubenswrapper[5008]: I0129 15:43:12.418038 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-bvg8g" podUID="216a7f22-8b15-4532-a345-2a9da518679f" containerName="registry-server" containerID="cri-o://e16317683a7a4cfd31f317c71e3b0587b54f896e9512e794e421ff3d8119247a" gracePeriod=2 Jan 29 15:43:12 crc kubenswrapper[5008]: W0129 15:43:12.534081 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdce8b7e_15b6_41ae_89f3_fd69472b9800.slice/crio-847014a54133b189fcdb609d1fca489b903dc90f362c801f4a21f00423a709a0 WatchSource:0}: Error finding container 847014a54133b189fcdb609d1fca489b903dc90f362c801f4a21f00423a709a0: Status 404 returned error can't find the container with id 847014a54133b189fcdb609d1fca489b903dc90f362c801f4a21f00423a709a0 Jan 29 15:43:12 crc kubenswrapper[5008]: I0129 15:43:12.547045 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-lv8km"] Jan 29 15:43:12 crc kubenswrapper[5008]: I0129 15:43:12.927714 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bvg8g" Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.019614 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mzcl\" (UniqueName: \"kubernetes.io/projected/216a7f22-8b15-4532-a345-2a9da518679f-kube-api-access-6mzcl\") pod \"216a7f22-8b15-4532-a345-2a9da518679f\" (UID: \"216a7f22-8b15-4532-a345-2a9da518679f\") " Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.021615 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-95tm6" Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.025811 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/216a7f22-8b15-4532-a345-2a9da518679f-kube-api-access-6mzcl" (OuterVolumeSpecName: "kube-api-access-6mzcl") pod "216a7f22-8b15-4532-a345-2a9da518679f" (UID: "216a7f22-8b15-4532-a345-2a9da518679f"). InnerVolumeSpecName "kube-api-access-6mzcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.031385 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4l5h6" Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.064410 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-95tm6" Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.121468 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mzcl\" (UniqueName: \"kubernetes.io/projected/216a7f22-8b15-4532-a345-2a9da518679f-kube-api-access-6mzcl\") on node \"crc\" DevicePath \"\"" Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.428040 5008 generic.go:334] "Generic (PLEG): container finished" podID="216a7f22-8b15-4532-a345-2a9da518679f" containerID="e16317683a7a4cfd31f317c71e3b0587b54f896e9512e794e421ff3d8119247a" exitCode=0 Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.428090 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bvg8g" Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.428111 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bvg8g" event={"ID":"216a7f22-8b15-4532-a345-2a9da518679f","Type":"ContainerDied","Data":"e16317683a7a4cfd31f317c71e3b0587b54f896e9512e794e421ff3d8119247a"} Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.428628 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bvg8g" event={"ID":"216a7f22-8b15-4532-a345-2a9da518679f","Type":"ContainerDied","Data":"a08f2d41444c3b33931f6fccecf3e8b61a7338461bd1d84edb3bcbd5755fa677"} Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.428677 5008 scope.go:117] "RemoveContainer" containerID="e16317683a7a4cfd31f317c71e3b0587b54f896e9512e794e421ff3d8119247a" Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.432174 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lv8km" event={"ID":"cdce8b7e-15b6-41ae-89f3-fd69472b9800","Type":"ContainerStarted","Data":"f6b9e7ec67196089535f435273040d6e56dbf49a92f92705d0160a9c45780f32"} Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.432508 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lv8km" event={"ID":"cdce8b7e-15b6-41ae-89f3-fd69472b9800","Type":"ContainerStarted","Data":"847014a54133b189fcdb609d1fca489b903dc90f362c801f4a21f00423a709a0"} Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.462409 5008 scope.go:117] "RemoveContainer" containerID="e16317683a7a4cfd31f317c71e3b0587b54f896e9512e794e421ff3d8119247a" Jan 29 15:43:13 crc kubenswrapper[5008]: E0129 15:43:13.463374 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e16317683a7a4cfd31f317c71e3b0587b54f896e9512e794e421ff3d8119247a\": container with ID starting with e16317683a7a4cfd31f317c71e3b0587b54f896e9512e794e421ff3d8119247a not found: ID does not exist" containerID="e16317683a7a4cfd31f317c71e3b0587b54f896e9512e794e421ff3d8119247a" Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.463445 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e16317683a7a4cfd31f317c71e3b0587b54f896e9512e794e421ff3d8119247a"} err="failed to get container status \"e16317683a7a4cfd31f317c71e3b0587b54f896e9512e794e421ff3d8119247a\": rpc error: code = NotFound desc = could not find container \"e16317683a7a4cfd31f317c71e3b0587b54f896e9512e794e421ff3d8119247a\": container with ID starting with e16317683a7a4cfd31f317c71e3b0587b54f896e9512e794e421ff3d8119247a not found: ID does not exist" Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.468434 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-lv8km" podStartSLOduration=2.313658152 podStartE2EDuration="2.468396505s" podCreationTimestamp="2026-01-29 15:43:11 +0000 UTC" firstStartedPulling="2026-01-29 15:43:12.53624855 +0000 UTC m=+936.209102777" lastFinishedPulling="2026-01-29 15:43:12.690986863 +0000 UTC m=+936.363841130" observedRunningTime="2026-01-29 15:43:13.448503921 +0000 UTC m=+937.121358258" watchObservedRunningTime="2026-01-29 15:43:13.468396505 +0000 UTC m=+937.141250792" Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.515275 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack-operators/openstack-operator-index-bvg8g"] Jan 29 15:43:13 crc kubenswrapper[5008]: I0129 15:43:13.524421 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-bvg8g"] Jan 29 15:43:15 crc kubenswrapper[5008]: I0129 15:43:15.339681 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="216a7f22-8b15-4532-a345-2a9da518679f" path="/var/lib/kubelet/pods/216a7f22-8b15-4532-a345-2a9da518679f/volumes" Jan 29 15:43:22 crc kubenswrapper[5008]: I0129 15:43:22.307382 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-lv8km" Jan 29 15:43:22 crc kubenswrapper[5008]: I0129 15:43:22.307959 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-lv8km" Jan 29 15:43:22 crc kubenswrapper[5008]: I0129 15:43:22.342908 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-lv8km" Jan 29 15:43:22 crc kubenswrapper[5008]: I0129 15:43:22.535588 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-lv8km" Jan 29 15:43:23 crc kubenswrapper[5008]: I0129 15:43:23.028489 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-95tm6" Jan 29 15:43:28 crc kubenswrapper[5008]: I0129 15:43:28.207098 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg"] Jan 29 15:43:28 crc kubenswrapper[5008]: E0129 15:43:28.208062 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216a7f22-8b15-4532-a345-2a9da518679f" containerName="registry-server" Jan 29 15:43:28 crc kubenswrapper[5008]: I0129 15:43:28.208096 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="216a7f22-8b15-4532-a345-2a9da518679f" containerName="registry-server" Jan 29 15:43:28 crc kubenswrapper[5008]: I0129 15:43:28.208449 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="216a7f22-8b15-4532-a345-2a9da518679f" containerName="registry-server" Jan 29 15:43:28 crc kubenswrapper[5008]: I0129 15:43:28.210280 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" Jan 29 15:43:28 crc kubenswrapper[5008]: I0129 15:43:28.216730 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg"] Jan 29 15:43:28 crc kubenswrapper[5008]: I0129 15:43:28.254612 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-c8v8v" Jan 29 15:43:28 crc kubenswrapper[5008]: I0129 15:43:28.357667 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blg86\" (UniqueName: \"kubernetes.io/projected/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-kube-api-access-blg86\") pod \"488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg\" (UID: \"dcbfd66c-b06c-432d-b8e8-a222ab00f36c\") " pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" Jan 29 15:43:28 crc kubenswrapper[5008]: I0129 15:43:28.357839 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-bundle\") pod \"488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg\" (UID: \"dcbfd66c-b06c-432d-b8e8-a222ab00f36c\") " pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" Jan 29 15:43:28 crc kubenswrapper[5008]: I0129 15:43:28.357884 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-util\") pod \"488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg\" (UID: \"dcbfd66c-b06c-432d-b8e8-a222ab00f36c\") " pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" Jan 29 15:43:28 crc kubenswrapper[5008]: I0129 15:43:28.458716 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blg86\" (UniqueName: \"kubernetes.io/projected/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-kube-api-access-blg86\") pod \"488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg\" (UID: \"dcbfd66c-b06c-432d-b8e8-a222ab00f36c\") " pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" Jan 29 15:43:28 crc kubenswrapper[5008]: I0129 15:43:28.459502 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-bundle\") pod \"488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg\" (UID: \"dcbfd66c-b06c-432d-b8e8-a222ab00f36c\") " pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" Jan 29 15:43:28 crc kubenswrapper[5008]: I0129 15:43:28.459691 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-util\") pod \"488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg\" (UID: \"dcbfd66c-b06c-432d-b8e8-a222ab00f36c\") " pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" Jan 29 15:43:28 crc kubenswrapper[5008]: I0129 15:43:28.460564 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-bundle\") pod \"488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg\" (UID: \"dcbfd66c-b06c-432d-b8e8-a222ab00f36c\") " pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" Jan 29 15:43:28 crc kubenswrapper[5008]: I0129 15:43:28.460687 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-util\") pod \"488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg\" (UID: \"dcbfd66c-b06c-432d-b8e8-a222ab00f36c\") " pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" Jan 29 15:43:28 crc kubenswrapper[5008]: I0129 15:43:28.494700 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blg86\" (UniqueName: \"kubernetes.io/projected/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-kube-api-access-blg86\") pod \"488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg\" (UID: \"dcbfd66c-b06c-432d-b8e8-a222ab00f36c\") " pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" Jan 29 15:43:28 crc kubenswrapper[5008]: I0129 15:43:28.570585 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" Jan 29 15:43:29 crc kubenswrapper[5008]: I0129 15:43:29.014229 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg"] Jan 29 15:43:29 crc kubenswrapper[5008]: I0129 15:43:29.556578 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" event={"ID":"dcbfd66c-b06c-432d-b8e8-a222ab00f36c","Type":"ContainerStarted","Data":"9239be16dbd1f7777729c08e9496dfa060494c0ee5947936ff5b5779c265a6ce"} Jan 29 15:43:44 crc kubenswrapper[5008]: I0129 15:43:44.680656 5008 generic.go:334] "Generic (PLEG): container finished" podID="dcbfd66c-b06c-432d-b8e8-a222ab00f36c" containerID="7d5c903e1f3ba0cea1d15fc05195cd1530a6583eb5b944d7caab6f6c2c55dd45" exitCode=0 Jan 29 15:43:44 crc kubenswrapper[5008]: I0129 15:43:44.680749 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" event={"ID":"dcbfd66c-b06c-432d-b8e8-a222ab00f36c","Type":"ContainerDied","Data":"7d5c903e1f3ba0cea1d15fc05195cd1530a6583eb5b944d7caab6f6c2c55dd45"} Jan 29 15:43:46 crc kubenswrapper[5008]: I0129 15:43:46.697066 5008 generic.go:334] "Generic (PLEG): container finished" podID="dcbfd66c-b06c-432d-b8e8-a222ab00f36c" containerID="4f414b7bdd9adf458c9ee14e38eea93ce3d8efca98d0436bf706ddef3cca134b" exitCode=0 Jan 29 15:43:46 crc kubenswrapper[5008]: I0129 15:43:46.697147 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" event={"ID":"dcbfd66c-b06c-432d-b8e8-a222ab00f36c","Type":"ContainerDied","Data":"4f414b7bdd9adf458c9ee14e38eea93ce3d8efca98d0436bf706ddef3cca134b"} Jan 29 15:43:47 crc kubenswrapper[5008]: I0129 15:43:47.707151 5008 generic.go:334] "Generic (PLEG): container finished" podID="dcbfd66c-b06c-432d-b8e8-a222ab00f36c" containerID="742ad74ea9369a7369ccd691f0019c383e12bb9305929becea41099d2763e1d2" exitCode=0 Jan 29 15:43:47 crc kubenswrapper[5008]: I0129 15:43:47.707202 5008 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" event={"ID":"dcbfd66c-b06c-432d-b8e8-a222ab00f36c","Type":"ContainerDied","Data":"742ad74ea9369a7369ccd691f0019c383e12bb9305929becea41099d2763e1d2"} Jan 29 15:43:49 crc kubenswrapper[5008]: I0129 15:43:49.015668 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" Jan 29 15:43:49 crc kubenswrapper[5008]: I0129 15:43:49.164331 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blg86\" (UniqueName: \"kubernetes.io/projected/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-kube-api-access-blg86\") pod \"dcbfd66c-b06c-432d-b8e8-a222ab00f36c\" (UID: \"dcbfd66c-b06c-432d-b8e8-a222ab00f36c\") " Jan 29 15:43:49 crc kubenswrapper[5008]: I0129 15:43:49.164447 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-bundle\") pod \"dcbfd66c-b06c-432d-b8e8-a222ab00f36c\" (UID: \"dcbfd66c-b06c-432d-b8e8-a222ab00f36c\") " Jan 29 15:43:49 crc kubenswrapper[5008]: I0129 15:43:49.164638 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-util\") pod \"dcbfd66c-b06c-432d-b8e8-a222ab00f36c\" (UID: \"dcbfd66c-b06c-432d-b8e8-a222ab00f36c\") " Jan 29 15:43:49 crc kubenswrapper[5008]: I0129 15:43:49.165628 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-bundle" (OuterVolumeSpecName: "bundle") pod "dcbfd66c-b06c-432d-b8e8-a222ab00f36c" (UID: "dcbfd66c-b06c-432d-b8e8-a222ab00f36c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:43:49 crc kubenswrapper[5008]: I0129 15:43:49.174141 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-kube-api-access-blg86" (OuterVolumeSpecName: "kube-api-access-blg86") pod "dcbfd66c-b06c-432d-b8e8-a222ab00f36c" (UID: "dcbfd66c-b06c-432d-b8e8-a222ab00f36c"). InnerVolumeSpecName "kube-api-access-blg86". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:43:49 crc kubenswrapper[5008]: I0129 15:43:49.178822 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-util" (OuterVolumeSpecName: "util") pod "dcbfd66c-b06c-432d-b8e8-a222ab00f36c" (UID: "dcbfd66c-b06c-432d-b8e8-a222ab00f36c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:43:49 crc kubenswrapper[5008]: I0129 15:43:49.266229 5008 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-util\") on node \"crc\" DevicePath \"\"" Jan 29 15:43:49 crc kubenswrapper[5008]: I0129 15:43:49.266271 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blg86\" (UniqueName: \"kubernetes.io/projected/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-kube-api-access-blg86\") on node \"crc\" DevicePath \"\"" Jan 29 15:43:49 crc kubenswrapper[5008]: I0129 15:43:49.266286 5008 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dcbfd66c-b06c-432d-b8e8-a222ab00f36c-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:43:49 crc kubenswrapper[5008]: I0129 15:43:49.722024 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" event={"ID":"dcbfd66c-b06c-432d-b8e8-a222ab00f36c","Type":"ContainerDied","Data":"9239be16dbd1f7777729c08e9496dfa060494c0ee5947936ff5b5779c265a6ce"} Jan 29 15:43:49 crc kubenswrapper[5008]: I0129 15:43:49.722332 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9239be16dbd1f7777729c08e9496dfa060494c0ee5947936ff5b5779c265a6ce" Jan 29 15:43:49 crc kubenswrapper[5008]: I0129 15:43:49.722078 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg" Jan 29 15:43:55 crc kubenswrapper[5008]: I0129 15:43:55.304068 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6d9fb954d-qlkhn"] Jan 29 15:43:55 crc kubenswrapper[5008]: E0129 15:43:55.304981 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbfd66c-b06c-432d-b8e8-a222ab00f36c" containerName="extract" Jan 29 15:43:55 crc kubenswrapper[5008]: I0129 15:43:55.305001 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbfd66c-b06c-432d-b8e8-a222ab00f36c" containerName="extract" Jan 29 15:43:55 crc kubenswrapper[5008]: E0129 15:43:55.305022 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbfd66c-b06c-432d-b8e8-a222ab00f36c" containerName="util" Jan 29 15:43:55 crc kubenswrapper[5008]: I0129 15:43:55.305035 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbfd66c-b06c-432d-b8e8-a222ab00f36c" containerName="util" Jan 29 15:43:55 crc kubenswrapper[5008]: E0129 15:43:55.305067 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbfd66c-b06c-432d-b8e8-a222ab00f36c" containerName="pull" Jan 29 15:43:55 crc kubenswrapper[5008]: I0129 15:43:55.305081 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbfd66c-b06c-432d-b8e8-a222ab00f36c" containerName="pull" Jan 29 15:43:55 crc kubenswrapper[5008]: I0129 15:43:55.305280 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcbfd66c-b06c-432d-b8e8-a222ab00f36c" containerName="extract" Jan 29 15:43:55 crc kubenswrapper[5008]: I0129 15:43:55.305968 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6d9fb954d-qlkhn" Jan 29 15:43:55 crc kubenswrapper[5008]: I0129 15:43:55.308449 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-cf8hf" Jan 29 15:43:55 crc kubenswrapper[5008]: I0129 15:43:55.374058 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6d9fb954d-qlkhn"] Jan 29 15:43:55 crc kubenswrapper[5008]: I0129 15:43:55.454339 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqj84\" (UniqueName: \"kubernetes.io/projected/9edb96c4-66c6-464b-8dd3-089d6be05a60-kube-api-access-gqj84\") pod \"openstack-operator-controller-init-6d9fb954d-qlkhn\" (UID: \"9edb96c4-66c6-464b-8dd3-089d6be05a60\") " pod="openstack-operators/openstack-operator-controller-init-6d9fb954d-qlkhn" Jan 29 15:43:55 crc kubenswrapper[5008]: I0129 15:43:55.556893 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqj84\" (UniqueName: \"kubernetes.io/projected/9edb96c4-66c6-464b-8dd3-089d6be05a60-kube-api-access-gqj84\") pod \"openstack-operator-controller-init-6d9fb954d-qlkhn\" (UID: \"9edb96c4-66c6-464b-8dd3-089d6be05a60\") " pod="openstack-operators/openstack-operator-controller-init-6d9fb954d-qlkhn" Jan 29 15:43:55 crc kubenswrapper[5008]: I0129 15:43:55.584346 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqj84\" (UniqueName: \"kubernetes.io/projected/9edb96c4-66c6-464b-8dd3-089d6be05a60-kube-api-access-gqj84\") pod \"openstack-operator-controller-init-6d9fb954d-qlkhn\" (UID: \"9edb96c4-66c6-464b-8dd3-089d6be05a60\") " pod="openstack-operators/openstack-operator-controller-init-6d9fb954d-qlkhn" Jan 29 15:43:55 crc kubenswrapper[5008]: I0129 15:43:55.623931 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6d9fb954d-qlkhn" Jan 29 15:43:55 crc kubenswrapper[5008]: I0129 15:43:55.850933 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6d9fb954d-qlkhn"] Jan 29 15:43:56 crc kubenswrapper[5008]: I0129 15:43:56.769685 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6d9fb954d-qlkhn" event={"ID":"9edb96c4-66c6-464b-8dd3-089d6be05a60","Type":"ContainerStarted","Data":"099b0885a305e83598fe4797ef06fe7fa0590be780598e1ffff24e9dbc8124fa"} Jan 29 15:44:01 crc kubenswrapper[5008]: I0129 15:44:01.805320 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6d9fb954d-qlkhn" event={"ID":"9edb96c4-66c6-464b-8dd3-089d6be05a60","Type":"ContainerStarted","Data":"907e8712b6d25dae39109b258304e1241c2e97daa46e05e90720eaf5f5d23ea8"} Jan 29 15:44:01 crc kubenswrapper[5008]: I0129 15:44:01.805907 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6d9fb954d-qlkhn" Jan 29 15:44:01 crc kubenswrapper[5008]: I0129 15:44:01.844070 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6d9fb954d-qlkhn" podStartSLOduration=1.993684027 podStartE2EDuration="6.844047739s" podCreationTimestamp="2026-01-29 15:43:55 +0000 UTC" firstStartedPulling="2026-01-29 15:43:55.861241147 +0000 UTC m=+979.534095394" lastFinishedPulling="2026-01-29 15:44:00.711604859 +0000 UTC m=+984.384459106" observedRunningTime="2026-01-29 15:44:01.840710128 +0000 UTC m=+985.513564405" watchObservedRunningTime="2026-01-29 15:44:01.844047739 +0000 UTC m=+985.516902006" Jan 29 15:44:05 crc kubenswrapper[5008]: I0129 15:44:05.627836 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6d9fb954d-qlkhn" Jan 29 15:44:06 crc kubenswrapper[5008]: I0129 15:44:06.485490 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6kzcj"] Jan 29 15:44:06 crc kubenswrapper[5008]: I0129 15:44:06.486585 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:44:06 crc kubenswrapper[5008]: I0129 15:44:06.494654 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6kzcj"] Jan 29 15:44:06 crc kubenswrapper[5008]: I0129 15:44:06.513206 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6t5h\" (UniqueName: \"kubernetes.io/projected/c82fc869-759d-4902-9aef-fdd69452b420-kube-api-access-m6t5h\") pod \"certified-operators-6kzcj\" (UID: \"c82fc869-759d-4902-9aef-fdd69452b420\") " pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:44:06 crc kubenswrapper[5008]: I0129 15:44:06.513309 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82fc869-759d-4902-9aef-fdd69452b420-catalog-content\") pod \"certified-operators-6kzcj\" (UID: \"c82fc869-759d-4902-9aef-fdd69452b420\") " pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:44:06 crc kubenswrapper[5008]: I0129 15:44:06.513369 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82fc869-759d-4902-9aef-fdd69452b420-utilities\") pod \"certified-operators-6kzcj\" (UID: \"c82fc869-759d-4902-9aef-fdd69452b420\") " pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:44:06 crc kubenswrapper[5008]: I0129 15:44:06.614252 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82fc869-759d-4902-9aef-fdd69452b420-utilities\") pod \"certified-operators-6kzcj\" (UID: \"c82fc869-759d-4902-9aef-fdd69452b420\") " pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:44:06 crc kubenswrapper[5008]: I0129 15:44:06.614345 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6t5h\" (UniqueName: \"kubernetes.io/projected/c82fc869-759d-4902-9aef-fdd69452b420-kube-api-access-m6t5h\") pod \"certified-operators-6kzcj\" (UID: \"c82fc869-759d-4902-9aef-fdd69452b420\") " pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:44:06 crc kubenswrapper[5008]: I0129 15:44:06.614424 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82fc869-759d-4902-9aef-fdd69452b420-catalog-content\") pod \"certified-operators-6kzcj\" (UID: \"c82fc869-759d-4902-9aef-fdd69452b420\") " pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:44:06 crc kubenswrapper[5008]: I0129 15:44:06.614775 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82fc869-759d-4902-9aef-fdd69452b420-utilities\") pod \"certified-operators-6kzcj\" (UID: \"c82fc869-759d-4902-9aef-fdd69452b420\") " pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:44:06 crc kubenswrapper[5008]: I0129 15:44:06.614974 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82fc869-759d-4902-9aef-fdd69452b420-catalog-content\") pod \"certified-operators-6kzcj\" (UID: \"c82fc869-759d-4902-9aef-fdd69452b420\") " pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:44:06 crc kubenswrapper[5008]: I0129 15:44:06.634555 5008 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-m6t5h\" (UniqueName: \"kubernetes.io/projected/c82fc869-759d-4902-9aef-fdd69452b420-kube-api-access-m6t5h\") pod \"certified-operators-6kzcj\" (UID: \"c82fc869-759d-4902-9aef-fdd69452b420\") " pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:44:06 crc kubenswrapper[5008]: I0129 15:44:06.802420 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:44:07 crc kubenswrapper[5008]: I0129 15:44:07.354396 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6kzcj"] Jan 29 15:44:07 crc kubenswrapper[5008]: I0129 15:44:07.875746 5008 generic.go:334] "Generic (PLEG): container finished" podID="c82fc869-759d-4902-9aef-fdd69452b420" containerID="5c142c008e193f2bb446f8c2889a9aba1d36db2e12bc749c5dffba8460d0aa0d" exitCode=0 Jan 29 15:44:07 crc kubenswrapper[5008]: I0129 15:44:07.875821 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6kzcj" event={"ID":"c82fc869-759d-4902-9aef-fdd69452b420","Type":"ContainerDied","Data":"5c142c008e193f2bb446f8c2889a9aba1d36db2e12bc749c5dffba8460d0aa0d"} Jan 29 15:44:07 crc kubenswrapper[5008]: I0129 15:44:07.875851 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6kzcj" event={"ID":"c82fc869-759d-4902-9aef-fdd69452b420","Type":"ContainerStarted","Data":"debd562bbbd639021d945b4eafb3e69ca2ec6a19be12a7aeaf5f75ffdbc60792"} Jan 29 15:44:08 crc kubenswrapper[5008]: E0129 15:44:08.009610 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:44:08 crc kubenswrapper[5008]: E0129 15:44:08.009873 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6t5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6kzcj_openshift-marketplace(c82fc869-759d-4902-9aef-fdd69452b420): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:44:08 crc kubenswrapper[5008]: E0129 15:44:08.011165 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-6kzcj" podUID="c82fc869-759d-4902-9aef-fdd69452b420" Jan 29 15:44:08 crc kubenswrapper[5008]: E0129 15:44:08.884148 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6kzcj" podUID="c82fc869-759d-4902-9aef-fdd69452b420" Jan 29 15:44:13 crc kubenswrapper[5008]: I0129 15:44:13.991227 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:44:13 crc kubenswrapper[5008]: I0129 15:44:13.991629 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:44:22 crc kubenswrapper[5008]: I0129 15:44:22.323488 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9l2c6"] Jan 29 15:44:22 crc kubenswrapper[5008]: I0129 15:44:22.325116 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9l2c6" Jan 29 15:44:22 crc kubenswrapper[5008]: I0129 15:44:22.337258 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9l2c6"] Jan 29 15:44:22 crc kubenswrapper[5008]: I0129 15:44:22.427202 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkwsn\" (UniqueName: \"kubernetes.io/projected/decefe5c-189e-43f8-88b2-f93a00567c3e-kube-api-access-gkwsn\") pod \"community-operators-9l2c6\" (UID: \"decefe5c-189e-43f8-88b2-f93a00567c3e\") " pod="openshift-marketplace/community-operators-9l2c6" Jan 29 15:44:22 crc kubenswrapper[5008]: I0129 15:44:22.427258 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/decefe5c-189e-43f8-88b2-f93a00567c3e-utilities\") pod \"community-operators-9l2c6\" (UID: \"decefe5c-189e-43f8-88b2-f93a00567c3e\") " pod="openshift-marketplace/community-operators-9l2c6" Jan 29 15:44:22 crc kubenswrapper[5008]: I0129 15:44:22.427387 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/decefe5c-189e-43f8-88b2-f93a00567c3e-catalog-content\") pod \"community-operators-9l2c6\" (UID: \"decefe5c-189e-43f8-88b2-f93a00567c3e\") " pod="openshift-marketplace/community-operators-9l2c6" Jan 29 15:44:22 crc kubenswrapper[5008]: I0129 15:44:22.528937 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkwsn\" (UniqueName: \"kubernetes.io/projected/decefe5c-189e-43f8-88b2-f93a00567c3e-kube-api-access-gkwsn\") pod \"community-operators-9l2c6\" (UID: \"decefe5c-189e-43f8-88b2-f93a00567c3e\") " pod="openshift-marketplace/community-operators-9l2c6" Jan 29 15:44:22 crc kubenswrapper[5008]: I0129 15:44:22.528980 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/decefe5c-189e-43f8-88b2-f93a00567c3e-utilities\") pod \"community-operators-9l2c6\" (UID: \"decefe5c-189e-43f8-88b2-f93a00567c3e\") " pod="openshift-marketplace/community-operators-9l2c6" Jan 29 15:44:22 crc kubenswrapper[5008]: I0129 15:44:22.529051 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/decefe5c-189e-43f8-88b2-f93a00567c3e-catalog-content\") pod \"community-operators-9l2c6\" (UID: \"decefe5c-189e-43f8-88b2-f93a00567c3e\") " pod="openshift-marketplace/community-operators-9l2c6" Jan 29 15:44:22 crc kubenswrapper[5008]: I0129 15:44:22.529889 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/decefe5c-189e-43f8-88b2-f93a00567c3e-catalog-content\") pod \"community-operators-9l2c6\" (UID: \"decefe5c-189e-43f8-88b2-f93a00567c3e\") " pod="openshift-marketplace/community-operators-9l2c6" Jan 29 15:44:22 crc kubenswrapper[5008]: I0129 15:44:22.530011 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/decefe5c-189e-43f8-88b2-f93a00567c3e-utilities\") pod \"community-operators-9l2c6\" (UID: \"decefe5c-189e-43f8-88b2-f93a00567c3e\") " pod="openshift-marketplace/community-operators-9l2c6" Jan 29 15:44:22 crc kubenswrapper[5008]: I0129 15:44:22.547139 5008 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gkwsn\" (UniqueName: \"kubernetes.io/projected/decefe5c-189e-43f8-88b2-f93a00567c3e-kube-api-access-gkwsn\") pod \"community-operators-9l2c6\" (UID: \"decefe5c-189e-43f8-88b2-f93a00567c3e\") " pod="openshift-marketplace/community-operators-9l2c6" Jan 29 15:44:22 crc kubenswrapper[5008]: I0129 15:44:22.684152 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9l2c6" Jan 29 15:44:22 crc kubenswrapper[5008]: I0129 15:44:22.972454 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9l2c6"] Jan 29 15:44:22 crc kubenswrapper[5008]: W0129 15:44:22.983979 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddecefe5c_189e_43f8_88b2_f93a00567c3e.slice/crio-1e9043307f7a755489d3a239db58010b75203626c362242971f41c104845eeea WatchSource:0}: Error finding container 1e9043307f7a755489d3a239db58010b75203626c362242971f41c104845eeea: Status 404 returned error can't find the container with id 1e9043307f7a755489d3a239db58010b75203626c362242971f41c104845eeea Jan 29 15:44:23 crc kubenswrapper[5008]: I0129 15:44:23.121410 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9l2c6" event={"ID":"decefe5c-189e-43f8-88b2-f93a00567c3e","Type":"ContainerStarted","Data":"1e9043307f7a755489d3a239db58010b75203626c362242971f41c104845eeea"} Jan 29 15:44:23 crc kubenswrapper[5008]: E0129 15:44:23.452577 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:44:23 crc kubenswrapper[5008]: E0129 15:44:23.452703 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6t5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6kzcj_openshift-marketplace(c82fc869-759d-4902-9aef-fdd69452b420): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:44:23 crc kubenswrapper[5008]: E0129 15:44:23.454462 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-6kzcj" podUID="c82fc869-759d-4902-9aef-fdd69452b420" Jan 29 15:44:24 crc kubenswrapper[5008]: I0129 15:44:24.132523 5008 generic.go:334] "Generic (PLEG): container finished" podID="decefe5c-189e-43f8-88b2-f93a00567c3e" containerID="11de983cd2749bba71f06017a27d73e928c76c7f26d9aaaadf0259656de48de2" exitCode=0 Jan 29 15:44:24 crc kubenswrapper[5008]: I0129 15:44:24.132618 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9l2c6" event={"ID":"decefe5c-189e-43f8-88b2-f93a00567c3e","Type":"ContainerDied","Data":"11de983cd2749bba71f06017a27d73e928c76c7f26d9aaaadf0259656de48de2"} Jan 29 15:44:24 crc kubenswrapper[5008]: E0129 15:44:24.303675 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:44:24 crc kubenswrapper[5008]: E0129 15:44:24.303878 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gkwsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9l2c6_openshift-marketplace(decefe5c-189e-43f8-88b2-f93a00567c3e): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:44:24 crc kubenswrapper[5008]: E0129 15:44:24.305069 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-9l2c6" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" Jan 29 15:44:25 crc kubenswrapper[5008]: E0129 15:44:25.140376 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9l2c6" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.235462 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hh7sg"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.236701 5008 util.go:30] "No sandbox for pod can be found. 
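[Editor's note] Both catalog pods (certified-operators-6kzcj and community-operators-9l2c6) fail identically: the pull of the v4.18 index image dies while "Requesting bearer token", i.e. during the Docker registry v2 auth handshake, before any blob is fetched. The registry answers the initial /v2/ ping with 401 plus a WWW-Authenticate: Bearer challenge naming a token realm; the client then asks that realm for a pull token, and it is this second request that registry.redhat.io is rejecting with 403. A standalone sketch of the same handshake, assuming anonymous access purely to surface the status code (repository name taken from the log; challenge parsing is deliberately simplified):

    package main

    import (
        "fmt"
        "net/http"
        "net/url"
        "regexp"
    )

    func main() {
        // Step 1: ping the registry; expect 401 with a Bearer challenge.
        ping, err := http.Get("https://registry.redhat.io/v2/")
        if err != nil {
            panic(err)
        }
        ping.Body.Close()
        challenge := ping.Header.Get("WWW-Authenticate")
        fmt.Println("ping:", ping.Status, "challenge:", challenge)

        // Step 2: pull realm and service out of the challenge (simplified).
        re := regexp.MustCompile(`(realm|service)="([^"]+)"`)
        params := map[string]string{}
        for _, m := range re.FindAllStringSubmatch(challenge, -1) {
            params[m[1]] = m[2]
        }
        if params["realm"] == "" {
            panic("no Bearer challenge returned")
        }

        // Step 3: request a pull token for the repository from the log.
        q := url.Values{}
        q.Set("service", params["service"])
        q.Set("scope", "repository:redhat/certified-operator-index:pull")
        tok, err := http.Get(params["realm"] + "?" + q.Encode())
        if err != nil {
            panic(err)
        }
        tok.Body.Close()

        // A 403 here is the same failure kubelet logs as
        // "Requesting bearer token: invalid status code from registry 403".
        fmt.Println("token endpoint:", tok.Status)
    }

In practice a 403 from registry.redhat.io at this step usually points at a missing or expired pull secret for that registry on the node; the image reference itself is fine, since the failure happens before any content is requested.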
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hh7sg" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.238864 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-pr6jc" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.245069 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hh7sg"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.259599 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-4zrsr"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.260597 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-4zrsr" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.262536 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-bzgh8" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.266452 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-n4xtj"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.267595 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-n4xtj" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.270019 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-47pm5" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.278343 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-4zrsr"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.310817 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-s4fq5"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.311609 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-s4fq5" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.313663 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-vn7c6" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.325964 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-n4xtj"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.335384 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-s4fq5"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.339222 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl8lj\" (UniqueName: \"kubernetes.io/projected/7a610d2e-cb71-4995-a0e8-f6dc26f7664a-kube-api-access-tl8lj\") pod \"designate-operator-controller-manager-6d9697b7f4-n4xtj\" (UID: \"7a610d2e-cb71-4995-a0e8-f6dc26f7664a\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-n4xtj" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.339352 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7j6n\" (UniqueName: \"kubernetes.io/projected/68468eb9-9e76-4f2f-9aba-cc3198e0a241-kube-api-access-j7j6n\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-hh7sg\" (UID: \"68468eb9-9e76-4f2f-9aba-cc3198e0a241\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hh7sg" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.339418 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhj9m\" (UniqueName: \"kubernetes.io/projected/6e775178-095e-451d-bded-b83f229c4231-kube-api-access-dhj9m\") pod \"cinder-operator-controller-manager-8d874c8fc-4zrsr\" (UID: \"6e775178-095e-451d-bded-b83f229c4231\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-4zrsr" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.360850 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-9sf7f"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.361579 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-9sf7f" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.363726 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-w2b4n" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.379578 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-qs9wh"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.380332 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-qs9wh" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.387854 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.388835 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.392453 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-thwfs" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.397402 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.397572 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-tbwkr" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.400932 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-9sf7f"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.418241 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-ncxxj"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.418922 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-ncxxj" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.426248 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-jz7r5" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.427482 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-qs9wh"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.447376 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7j6n\" (UniqueName: \"kubernetes.io/projected/68468eb9-9e76-4f2f-9aba-cc3198e0a241-kube-api-access-j7j6n\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-hh7sg\" (UID: \"68468eb9-9e76-4f2f-9aba-cc3198e0a241\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hh7sg" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.447441 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert\") pod \"infra-operator-controller-manager-79955696d6-zvcs5\" (UID: \"4ff89cd9-951e-4907-b60c-a1a1c08007a4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.447497 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2mtw\" (UniqueName: \"kubernetes.io/projected/4ff89cd9-951e-4907-b60c-a1a1c08007a4-kube-api-access-f2mtw\") pod \"infra-operator-controller-manager-79955696d6-zvcs5\" (UID: \"4ff89cd9-951e-4907-b60c-a1a1c08007a4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.447552 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhj9m\" (UniqueName: \"kubernetes.io/projected/6e775178-095e-451d-bded-b83f229c4231-kube-api-access-dhj9m\") pod \"cinder-operator-controller-manager-8d874c8fc-4zrsr\" (UID: \"6e775178-095e-451d-bded-b83f229c4231\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-4zrsr" Jan 29 
15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.447602 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmws6\" (UniqueName: \"kubernetes.io/projected/cae67616-1145-4057-b304-08a322e78d9d-kube-api-access-qmws6\") pod \"horizon-operator-controller-manager-5fb775575f-qs9wh\" (UID: \"cae67616-1145-4057-b304-08a322e78d9d\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-qs9wh" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.447637 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl8lj\" (UniqueName: \"kubernetes.io/projected/7a610d2e-cb71-4995-a0e8-f6dc26f7664a-kube-api-access-tl8lj\") pod \"designate-operator-controller-manager-6d9697b7f4-n4xtj\" (UID: \"7a610d2e-cb71-4995-a0e8-f6dc26f7664a\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-n4xtj" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.447708 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wxlp\" (UniqueName: \"kubernetes.io/projected/b46e3eea-2330-4b3f-b45d-34ae38a0dde9-kube-api-access-8wxlp\") pod \"heat-operator-controller-manager-69d6db494d-9sf7f\" (UID: \"b46e3eea-2330-4b3f-b45d-34ae38a0dde9\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-9sf7f" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.447771 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45p6s\" (UniqueName: \"kubernetes.io/projected/94a4547d-0c92-41e4-8ca7-64e21df1708e-kube-api-access-45p6s\") pod \"glance-operator-controller-manager-8886f4c47-s4fq5\" (UID: \"94a4547d-0c92-41e4-8ca7-64e21df1708e\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-s4fq5" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.448853 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.457838 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-qhwnb"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.459606 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qhwnb" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.467222 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-j6css" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.469465 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-q7khh"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.471549 5008 util.go:30] "No sandbox for pod can be found. 
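[Editor's note] Every operator pod above carries a generated kube-api-access-* volume; the "kubernetes.io/projected/..." UniqueName in these entries shows it is the projected service-account volume, which the kubelet mounts read-only at /var/run/secrets/kubernetes.io/serviceaccount (ReadOnly:true in the container dumps earlier). It combines a TokenRequest token, the cluster CA bundle, and the pod's namespace. A sketch of the equivalent volume definition using the k8s.io/api types (field values mirror the usual defaults; this is illustrative, not the exact object the API server generated):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Default TokenRequest lifetime used for these volumes (assumption).
        expiration := int64(3607)

        vol := corev1.Volume{
            // Name taken from the designate-operator pod above.
            Name: "kube-api-access-tl8lj",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{
                        {ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
                            Path:              "token",
                            ExpirationSeconds: &expiration,
                        }},
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
                            Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
                        }},
                        {DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "namespace",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
                            }},
                        }},
                    },
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }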
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-q7khh" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.478994 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-qhwnb"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.483439 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-sdg77" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.494806 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhj9m\" (UniqueName: \"kubernetes.io/projected/6e775178-095e-451d-bded-b83f229c4231-kube-api-access-dhj9m\") pod \"cinder-operator-controller-manager-8d874c8fc-4zrsr\" (UID: \"6e775178-095e-451d-bded-b83f229c4231\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-4zrsr" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.506375 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7j6n\" (UniqueName: \"kubernetes.io/projected/68468eb9-9e76-4f2f-9aba-cc3198e0a241-kube-api-access-j7j6n\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-hh7sg\" (UID: \"68468eb9-9e76-4f2f-9aba-cc3198e0a241\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hh7sg" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.506725 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-ncxxj"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.507173 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl8lj\" (UniqueName: \"kubernetes.io/projected/7a610d2e-cb71-4995-a0e8-f6dc26f7664a-kube-api-access-tl8lj\") pod \"designate-operator-controller-manager-6d9697b7f4-n4xtj\" (UID: \"7a610d2e-cb71-4995-a0e8-f6dc26f7664a\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-n4xtj" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.552497 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2mtw\" (UniqueName: \"kubernetes.io/projected/4ff89cd9-951e-4907-b60c-a1a1c08007a4-kube-api-access-f2mtw\") pod \"infra-operator-controller-manager-79955696d6-zvcs5\" (UID: \"4ff89cd9-951e-4907-b60c-a1a1c08007a4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.552764 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk5pq\" (UniqueName: \"kubernetes.io/projected/e57e9a97-d32e-4464-b12c-ba44a4643ada-kube-api-access-wk5pq\") pod \"manila-operator-controller-manager-7dd968899f-q7khh\" (UID: \"e57e9a97-d32e-4464-b12c-ba44a4643ada\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-q7khh" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.552918 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmws6\" (UniqueName: \"kubernetes.io/projected/cae67616-1145-4057-b304-08a322e78d9d-kube-api-access-qmws6\") pod \"horizon-operator-controller-manager-5fb775575f-qs9wh\" (UID: \"cae67616-1145-4057-b304-08a322e78d9d\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-qs9wh" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.553054 5008 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8wxlp\" (UniqueName: \"kubernetes.io/projected/b46e3eea-2330-4b3f-b45d-34ae38a0dde9-kube-api-access-8wxlp\") pod \"heat-operator-controller-manager-69d6db494d-9sf7f\" (UID: \"b46e3eea-2330-4b3f-b45d-34ae38a0dde9\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-9sf7f" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.553165 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7jgj\" (UniqueName: \"kubernetes.io/projected/e76346a9-7ba5-4178-82b7-da9f0c337c08-kube-api-access-j7jgj\") pod \"keystone-operator-controller-manager-84f48565d4-qhwnb\" (UID: \"e76346a9-7ba5-4178-82b7-da9f0c337c08\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qhwnb" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.553264 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jp82\" (UniqueName: \"kubernetes.io/projected/6196a4fd-8576-412f-9140-cf61b98444a4-kube-api-access-9jp82\") pod \"ironic-operator-controller-manager-5f4b8bd54d-ncxxj\" (UID: \"6196a4fd-8576-412f-9140-cf61b98444a4\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-ncxxj" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.553351 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45p6s\" (UniqueName: \"kubernetes.io/projected/94a4547d-0c92-41e4-8ca7-64e21df1708e-kube-api-access-45p6s\") pod \"glance-operator-controller-manager-8886f4c47-s4fq5\" (UID: \"94a4547d-0c92-41e4-8ca7-64e21df1708e\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-s4fq5" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.553462 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert\") pod \"infra-operator-controller-manager-79955696d6-zvcs5\" (UID: \"4ff89cd9-951e-4907-b60c-a1a1c08007a4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" Jan 29 15:44:30 crc kubenswrapper[5008]: E0129 15:44:30.553678 5008 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 15:44:30 crc kubenswrapper[5008]: E0129 15:44:30.553832 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert podName:4ff89cd9-951e-4907-b60c-a1a1c08007a4 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:31.053801648 +0000 UTC m=+1014.726655895 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert") pod "infra-operator-controller-manager-79955696d6-zvcs5" (UID: "4ff89cd9-951e-4907-b60c-a1a1c08007a4") : secret "infra-operator-webhook-server-cert" not found Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.557947 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-q7khh"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.558410 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hh7sg" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.563026 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-44qcp"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.564320 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-44qcp" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.576907 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-bjjwz"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.578171 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bjjwz" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.589697 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-fs9lw" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.589970 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-6hhjd" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.590544 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-4zrsr" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.598022 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmws6\" (UniqueName: \"kubernetes.io/projected/cae67616-1145-4057-b304-08a322e78d9d-kube-api-access-qmws6\") pod \"horizon-operator-controller-manager-5fb775575f-qs9wh\" (UID: \"cae67616-1145-4057-b304-08a322e78d9d\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-qs9wh" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.604620 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-bjjwz"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.604898 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-n4xtj" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.606949 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wxlp\" (UniqueName: \"kubernetes.io/projected/b46e3eea-2330-4b3f-b45d-34ae38a0dde9-kube-api-access-8wxlp\") pod \"heat-operator-controller-manager-69d6db494d-9sf7f\" (UID: \"b46e3eea-2330-4b3f-b45d-34ae38a0dde9\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-9sf7f" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.611339 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2mtw\" (UniqueName: \"kubernetes.io/projected/4ff89cd9-951e-4907-b60c-a1a1c08007a4-kube-api-access-f2mtw\") pod \"infra-operator-controller-manager-79955696d6-zvcs5\" (UID: \"4ff89cd9-951e-4907-b60c-a1a1c08007a4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.622582 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45p6s\" (UniqueName: \"kubernetes.io/projected/94a4547d-0c92-41e4-8ca7-64e21df1708e-kube-api-access-45p6s\") pod \"glance-operator-controller-manager-8886f4c47-s4fq5\" (UID: \"94a4547d-0c92-41e4-8ca7-64e21df1708e\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-s4fq5" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.625462 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-klqvj"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.626538 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-klqvj" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.628062 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-sql9t" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.630622 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-s4fq5" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.654534 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmdcr\" (UniqueName: \"kubernetes.io/projected/14020423-5911-4b69-8889-b12267c9bbf9-kube-api-access-gmdcr\") pod \"neutron-operator-controller-manager-585dbc889-44qcp\" (UID: \"14020423-5911-4b69-8889-b12267c9bbf9\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-44qcp" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.654593 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk5pq\" (UniqueName: \"kubernetes.io/projected/e57e9a97-d32e-4464-b12c-ba44a4643ada-kube-api-access-wk5pq\") pod \"manila-operator-controller-manager-7dd968899f-q7khh\" (UID: \"e57e9a97-d32e-4464-b12c-ba44a4643ada\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-q7khh" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.654645 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hpbg\" (UniqueName: \"kubernetes.io/projected/d39876a5-4ca3-44e2-a4c5-c6541c2ec812-kube-api-access-8hpbg\") pod \"mariadb-operator-controller-manager-67bf948998-bjjwz\" (UID: \"d39876a5-4ca3-44e2-a4c5-c6541c2ec812\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bjjwz" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.654685 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7jgj\" (UniqueName: \"kubernetes.io/projected/e76346a9-7ba5-4178-82b7-da9f0c337c08-kube-api-access-j7jgj\") pod \"keystone-operator-controller-manager-84f48565d4-qhwnb\" (UID: \"e76346a9-7ba5-4178-82b7-da9f0c337c08\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qhwnb" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.654716 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jp82\" (UniqueName: \"kubernetes.io/projected/6196a4fd-8576-412f-9140-cf61b98444a4-kube-api-access-9jp82\") pod \"ironic-operator-controller-manager-5f4b8bd54d-ncxxj\" (UID: \"6196a4fd-8576-412f-9140-cf61b98444a4\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-ncxxj" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.666386 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-44qcp"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.674442 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-klqvj"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.677272 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jp82\" (UniqueName: \"kubernetes.io/projected/6196a4fd-8576-412f-9140-cf61b98444a4-kube-api-access-9jp82\") pod \"ironic-operator-controller-manager-5f4b8bd54d-ncxxj\" (UID: \"6196a4fd-8576-412f-9140-cf61b98444a4\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-ncxxj" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.677908 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk5pq\" (UniqueName: \"kubernetes.io/projected/e57e9a97-d32e-4464-b12c-ba44a4643ada-kube-api-access-wk5pq\") pod 
\"manila-operator-controller-manager-7dd968899f-q7khh\" (UID: \"e57e9a97-d32e-4464-b12c-ba44a4643ada\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-q7khh" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.684190 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7jgj\" (UniqueName: \"kubernetes.io/projected/e76346a9-7ba5-4178-82b7-da9f0c337c08-kube-api-access-j7jgj\") pod \"keystone-operator-controller-manager-84f48565d4-qhwnb\" (UID: \"e76346a9-7ba5-4178-82b7-da9f0c337c08\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qhwnb" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.688262 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-q7khh" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.689829 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-zbddd"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.690739 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zbddd" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.693912 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-nbtjx" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.702104 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-zbddd"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.724759 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.726053 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.727272 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-9sf7f" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.727440 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.728335 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-wmglr" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.748649 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-qs9wh" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.755452 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hpbg\" (UniqueName: \"kubernetes.io/projected/d39876a5-4ca3-44e2-a4c5-c6541c2ec812-kube-api-access-8hpbg\") pod \"mariadb-operator-controller-manager-67bf948998-bjjwz\" (UID: \"d39876a5-4ca3-44e2-a4c5-c6541c2ec812\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bjjwz" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.755516 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccnhz\" (UniqueName: \"kubernetes.io/projected/27a92a88-ee29-47fd-b4cf-5e3232ce7573-kube-api-access-ccnhz\") pod \"nova-operator-controller-manager-55bff696bd-klqvj\" (UID: \"27a92a88-ee29-47fd-b4cf-5e3232ce7573\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-klqvj" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.755541 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t59lc\" (UniqueName: \"kubernetes.io/projected/4dc123ee-b76c-46a7-9aea-76457232036b-kube-api-access-t59lc\") pod \"octavia-operator-controller-manager-6687f8d877-zbddd\" (UID: \"4dc123ee-b76c-46a7-9aea-76457232036b\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zbddd" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.755593 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmdcr\" (UniqueName: \"kubernetes.io/projected/14020423-5911-4b69-8889-b12267c9bbf9-kube-api-access-gmdcr\") pod \"neutron-operator-controller-manager-585dbc889-44qcp\" (UID: \"14020423-5911-4b69-8889-b12267c9bbf9\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-44qcp" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.761683 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.769188 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-qjtzq"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.770195 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qjtzq" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.770486 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-84h7l"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.771234 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-84h7l" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.772198 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hpbg\" (UniqueName: \"kubernetes.io/projected/d39876a5-4ca3-44e2-a4c5-c6541c2ec812-kube-api-access-8hpbg\") pod \"mariadb-operator-controller-manager-67bf948998-bjjwz\" (UID: \"d39876a5-4ca3-44e2-a4c5-c6541c2ec812\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bjjwz" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.772210 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-4fkt4" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.777123 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-s9k57" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.780867 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmdcr\" (UniqueName: \"kubernetes.io/projected/14020423-5911-4b69-8889-b12267c9bbf9-kube-api-access-gmdcr\") pod \"neutron-operator-controller-manager-585dbc889-44qcp\" (UID: \"14020423-5911-4b69-8889-b12267c9bbf9\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-44qcp" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.786439 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-xjf4m"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.788141 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xjf4m" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.796462 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-84h7l"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.802092 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-54s5g" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.808572 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-qjtzq"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.840556 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-xjf4m"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.857276 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twwv9\" (UniqueName: \"kubernetes.io/projected/a9dfe223-8569-48bb-8b52-c3fb069208a0-kube-api-access-twwv9\") pod \"swift-operator-controller-manager-68fc8c869-84h7l\" (UID: \"a9dfe223-8569-48bb-8b52-c3fb069208a0\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-84h7l" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.857344 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv\" (UID: \"9f5d1ef8-a9b5-428a-b441-b7d763dbd102\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.857391 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccnhz\" (UniqueName: \"kubernetes.io/projected/27a92a88-ee29-47fd-b4cf-5e3232ce7573-kube-api-access-ccnhz\") pod \"nova-operator-controller-manager-55bff696bd-klqvj\" (UID: \"27a92a88-ee29-47fd-b4cf-5e3232ce7573\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-klqvj" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.857413 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t59lc\" (UniqueName: \"kubernetes.io/projected/4dc123ee-b76c-46a7-9aea-76457232036b-kube-api-access-t59lc\") pod \"octavia-operator-controller-manager-6687f8d877-zbddd\" (UID: \"4dc123ee-b76c-46a7-9aea-76457232036b\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zbddd" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.857436 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njtnx\" (UniqueName: \"kubernetes.io/projected/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-kube-api-access-njtnx\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv\" (UID: \"9f5d1ef8-a9b5-428a-b441-b7d763dbd102\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.857470 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wdbm\" (UniqueName: \"kubernetes.io/projected/ce6a1921-bd9b-47c4-8f5f-9443d8e4c08f-kube-api-access-8wdbm\") pod \"placement-operator-controller-manager-5b964cf4cd-xjf4m\" (UID: \"ce6a1921-bd9b-47c4-8f5f-9443d8e4c08f\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xjf4m" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.857488 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktqqm\" (UniqueName: \"kubernetes.io/projected/cb2d6253-7fa7-41a9-9d0b-002ef590c4db-kube-api-access-ktqqm\") pod \"ovn-operator-controller-manager-788c46999f-qjtzq\" (UID: \"cb2d6253-7fa7-41a9-9d0b-002ef590c4db\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qjtzq" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.865299 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bbsft"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.866154 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bbsft" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.870351 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-9jzzm" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.871085 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-ncxxj" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.878276 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t59lc\" (UniqueName: \"kubernetes.io/projected/4dc123ee-b76c-46a7-9aea-76457232036b-kube-api-access-t59lc\") pod \"octavia-operator-controller-manager-6687f8d877-zbddd\" (UID: \"4dc123ee-b76c-46a7-9aea-76457232036b\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zbddd" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.915357 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccnhz\" (UniqueName: \"kubernetes.io/projected/27a92a88-ee29-47fd-b4cf-5e3232ce7573-kube-api-access-ccnhz\") pod \"nova-operator-controller-manager-55bff696bd-klqvj\" (UID: \"27a92a88-ee29-47fd-b4cf-5e3232ce7573\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-klqvj" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.920307 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bbsft"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.959506 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktqqm\" (UniqueName: \"kubernetes.io/projected/cb2d6253-7fa7-41a9-9d0b-002ef590c4db-kube-api-access-ktqqm\") pod \"ovn-operator-controller-manager-788c46999f-qjtzq\" (UID: \"cb2d6253-7fa7-41a9-9d0b-002ef590c4db\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qjtzq" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.969221 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twwv9\" (UniqueName: \"kubernetes.io/projected/a9dfe223-8569-48bb-8b52-c3fb069208a0-kube-api-access-twwv9\") pod \"swift-operator-controller-manager-68fc8c869-84h7l\" (UID: \"a9dfe223-8569-48bb-8b52-c3fb069208a0\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-84h7l" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.969281 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv9f4\" (UniqueName: \"kubernetes.io/projected/30b3e5fd-7f41-4ed9-a1de-cb282994ad38-kube-api-access-jv9f4\") pod \"telemetry-operator-controller-manager-64b5b76f97-bbsft\" (UID: \"30b3e5fd-7f41-4ed9-a1de-cb282994ad38\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bbsft" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.969358 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv\" (UID: \"9f5d1ef8-a9b5-428a-b441-b7d763dbd102\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.969512 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njtnx\" (UniqueName: \"kubernetes.io/projected/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-kube-api-access-njtnx\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv\" (UID: \"9f5d1ef8-a9b5-428a-b441-b7d763dbd102\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" Jan 29 15:44:30 crc 
kubenswrapper[5008]: I0129 15:44:30.969580 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wdbm\" (UniqueName: \"kubernetes.io/projected/ce6a1921-bd9b-47c4-8f5f-9443d8e4c08f-kube-api-access-8wdbm\") pod \"placement-operator-controller-manager-5b964cf4cd-xjf4m\" (UID: \"ce6a1921-bd9b-47c4-8f5f-9443d8e4c08f\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xjf4m" Jan 29 15:44:30 crc kubenswrapper[5008]: E0129 15:44:30.970367 5008 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:44:30 crc kubenswrapper[5008]: E0129 15:44:30.970434 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert podName:9f5d1ef8-a9b5-428a-b441-b7d763dbd102 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:31.470415719 +0000 UTC m=+1015.143269966 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" (UID: "9f5d1ef8-a9b5-428a-b441-b7d763dbd102") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.974578 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qhwnb" Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.988364 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-fxz5k"] Jan 29 15:44:30 crc kubenswrapper[5008]: I0129 15:44:30.989690 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fxz5k" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.004124 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-q9vj4" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.004642 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-44qcp" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.018881 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-fxz5k"] Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.026276 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bjjwz" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.047071 5008 util.go:30] "No sandbox for pod can be found. 
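[Editor's note] The burst of "SyncLoop ADD" / "SyncLoop UPDATE" entries here is the kubelet's API watch delivering the whole openstack-operators rollout at once. The same event stream can be observed from outside the node with a shared informer; a minimal sketch (namespace taken from the log; error handling trimmed, kubeconfig path is an assumption):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, _ := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        client := kubernetes.NewForConfigOrDie(cfg)

        factory := informers.NewSharedInformerFactoryWithOptions(
            client, 0, informers.WithNamespace("openstack-operators"))

        factory.Core().V1().Pods().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
            // Mirrors the kubelet's "SyncLoop ADD" entries above.
            AddFunc: func(obj interface{}) {
                pod := obj.(*corev1.Pod)
                fmt.Println("ADD", pod.Namespace+"/"+pod.Name)
            },
            // Mirrors "SyncLoop UPDATE".
            UpdateFunc: func(_, obj interface{}) {
                pod := obj.(*corev1.Pod)
                fmt.Println("UPDATE", pod.Namespace+"/"+pod.Name)
            },
        })

        stop := make(chan struct{})
        factory.Start(stop)
        select {} // run until killed
    }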
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-klqvj" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.047620 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktqqm\" (UniqueName: \"kubernetes.io/projected/cb2d6253-7fa7-41a9-9d0b-002ef590c4db-kube-api-access-ktqqm\") pod \"ovn-operator-controller-manager-788c46999f-qjtzq\" (UID: \"cb2d6253-7fa7-41a9-9d0b-002ef590c4db\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qjtzq" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.052171 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wdbm\" (UniqueName: \"kubernetes.io/projected/ce6a1921-bd9b-47c4-8f5f-9443d8e4c08f-kube-api-access-8wdbm\") pod \"placement-operator-controller-manager-5b964cf4cd-xjf4m\" (UID: \"ce6a1921-bd9b-47c4-8f5f-9443d8e4c08f\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xjf4m" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.054755 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njtnx\" (UniqueName: \"kubernetes.io/projected/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-kube-api-access-njtnx\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv\" (UID: \"9f5d1ef8-a9b5-428a-b441-b7d763dbd102\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.056590 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twwv9\" (UniqueName: \"kubernetes.io/projected/a9dfe223-8569-48bb-8b52-c3fb069208a0-kube-api-access-twwv9\") pod \"swift-operator-controller-manager-68fc8c869-84h7l\" (UID: \"a9dfe223-8569-48bb-8b52-c3fb069208a0\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-84h7l" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.071344 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv9f4\" (UniqueName: \"kubernetes.io/projected/30b3e5fd-7f41-4ed9-a1de-cb282994ad38-kube-api-access-jv9f4\") pod \"telemetry-operator-controller-manager-64b5b76f97-bbsft\" (UID: \"30b3e5fd-7f41-4ed9-a1de-cb282994ad38\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bbsft" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.071428 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm9mw\" (UniqueName: \"kubernetes.io/projected/d4fd527b-7108-4f94-b7a9-bb0b358b8c3c-kube-api-access-bm9mw\") pod \"test-operator-controller-manager-56f8bfcd9f-fxz5k\" (UID: \"d4fd527b-7108-4f94-b7a9-bb0b358b8c3c\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fxz5k" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.071506 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert\") pod \"infra-operator-controller-manager-79955696d6-zvcs5\" (UID: \"4ff89cd9-951e-4907-b60c-a1a1c08007a4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" Jan 29 15:44:31 crc kubenswrapper[5008]: E0129 15:44:31.071704 5008 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 15:44:31 crc kubenswrapper[5008]: E0129 
15:44:31.071770 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert podName:4ff89cd9-951e-4907-b60c-a1a1c08007a4 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:32.07174874 +0000 UTC m=+1015.744602987 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert") pod "infra-operator-controller-manager-79955696d6-zvcs5" (UID: "4ff89cd9-951e-4907-b60c-a1a1c08007a4") : secret "infra-operator-webhook-server-cert" not found Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.097021 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zbddd" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.117447 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv9f4\" (UniqueName: \"kubernetes.io/projected/30b3e5fd-7f41-4ed9-a1de-cb282994ad38-kube-api-access-jv9f4\") pod \"telemetry-operator-controller-manager-64b5b76f97-bbsft\" (UID: \"30b3e5fd-7f41-4ed9-a1de-cb282994ad38\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bbsft" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.125151 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-dwhc5"] Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.126109 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-dwhc5" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.138677 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-vg7sf" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.142154 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qjtzq" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.151290 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-dwhc5"] Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.173648 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc2vx\" (UniqueName: \"kubernetes.io/projected/a2163508-5800-4d97-b8d4-1f3815764822-kube-api-access-fc2vx\") pod \"watcher-operator-controller-manager-564965969-dwhc5\" (UID: \"a2163508-5800-4d97-b8d4-1f3815764822\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-dwhc5" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.173723 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm9mw\" (UniqueName: \"kubernetes.io/projected/d4fd527b-7108-4f94-b7a9-bb0b358b8c3c-kube-api-access-bm9mw\") pod \"test-operator-controller-manager-56f8bfcd9f-fxz5k\" (UID: \"d4fd527b-7108-4f94-b7a9-bb0b358b8c3c\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fxz5k" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.190814 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv"] Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.191595 5008 util.go:30] "No sandbox for pod can be found. 
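[Editor's note] The durationBeforeRetry values in these nestedpendingoperations lines double per volume on each failed attempt: the same cert volume steps 500ms, then 1s, then 2s across this log, so the kubelet backs off per failing mount rather than hot-looping on a missing secret. A minimal sketch of that progression, assuming a simple doubling policy with a cap (the exact cap is kubelet-internal and not visible in this log):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Reproduce the durationBeforeRetry progression seen in the log:
	// 500ms, 1s, 2s, ... doubling after each failed MountVolume.SetUp.
	const maxDelay = 2 * time.Minute // assumed cap, for illustration only
	delay := 500 * time.Millisecond
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```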
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.199430 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-84h7l" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.201285 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-wddh7" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.201772 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.201878 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.208108 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xjf4m" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.208507 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv"] Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.213983 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtv85"] Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.217286 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm9mw\" (UniqueName: \"kubernetes.io/projected/d4fd527b-7108-4f94-b7a9-bb0b358b8c3c-kube-api-access-bm9mw\") pod \"test-operator-controller-manager-56f8bfcd9f-fxz5k\" (UID: \"d4fd527b-7108-4f94-b7a9-bb0b358b8c3c\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fxz5k" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.225206 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bbsft" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.225991 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtv85" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.232488 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-hzrzg" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.236689 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtv85"] Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.271816 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hh7sg"] Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.274740 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc2vx\" (UniqueName: \"kubernetes.io/projected/a2163508-5800-4d97-b8d4-1f3815764822-kube-api-access-fc2vx\") pod \"watcher-operator-controller-manager-564965969-dwhc5\" (UID: \"a2163508-5800-4d97-b8d4-1f3815764822\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-dwhc5" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.274821 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.274864 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v2kd\" (UniqueName: \"kubernetes.io/projected/1a373ec7-8da3-4b3e-a08a-e5e8b8e5a2d1-kube-api-access-9v2kd\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vtv85\" (UID: \"1a373ec7-8da3-4b3e-a08a-e5e8b8e5a2d1\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtv85" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.274969 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.275017 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qngtp\" (UniqueName: \"kubernetes.io/projected/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-kube-api-access-qngtp\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.294081 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc2vx\" (UniqueName: \"kubernetes.io/projected/a2163508-5800-4d97-b8d4-1f3815764822-kube-api-access-fc2vx\") pod \"watcher-operator-controller-manager-564965969-dwhc5\" (UID: \"a2163508-5800-4d97-b8d4-1f3815764822\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-dwhc5" Jan 29 15:44:31 crc kubenswrapper[5008]: 
W0129 15:44:31.325165 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68468eb9_9e76_4f2f_9aba_cc3198e0a241.slice/crio-3d232da1d12d8a44b3ec70cc00ef25b778881680090ad4976d0d2a644ce54a37 WatchSource:0}: Error finding container 3d232da1d12d8a44b3ec70cc00ef25b778881680090ad4976d0d2a644ce54a37: Status 404 returned error can't find the container with id 3d232da1d12d8a44b3ec70cc00ef25b778881680090ad4976d0d2a644ce54a37 Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.376217 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.376268 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v2kd\" (UniqueName: \"kubernetes.io/projected/1a373ec7-8da3-4b3e-a08a-e5e8b8e5a2d1-kube-api-access-9v2kd\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vtv85\" (UID: \"1a373ec7-8da3-4b3e-a08a-e5e8b8e5a2d1\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtv85" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.376339 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.376354 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qngtp\" (UniqueName: \"kubernetes.io/projected/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-kube-api-access-qngtp\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:31 crc kubenswrapper[5008]: E0129 15:44:31.376591 5008 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 15:44:31 crc kubenswrapper[5008]: E0129 15:44:31.376640 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs podName:44442d63-1bbc-4d1c-9e9d-2a9ad59baf59 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:31.876625157 +0000 UTC m=+1015.549479394 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs") pod "openstack-operator-controller-manager-77db58b9dd-srsvv" (UID: "44442d63-1bbc-4d1c-9e9d-2a9ad59baf59") : secret "webhook-server-cert" not found Jan 29 15:44:31 crc kubenswrapper[5008]: E0129 15:44:31.376807 5008 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 15:44:31 crc kubenswrapper[5008]: E0129 15:44:31.376830 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs podName:44442d63-1bbc-4d1c-9e9d-2a9ad59baf59 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:31.876822241 +0000 UTC m=+1015.549676478 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs") pod "openstack-operator-controller-manager-77db58b9dd-srsvv" (UID: "44442d63-1bbc-4d1c-9e9d-2a9ad59baf59") : secret "metrics-server-cert" not found Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.401606 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qngtp\" (UniqueName: \"kubernetes.io/projected/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-kube-api-access-qngtp\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.403049 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v2kd\" (UniqueName: \"kubernetes.io/projected/1a373ec7-8da3-4b3e-a08a-e5e8b8e5a2d1-kube-api-access-9v2kd\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vtv85\" (UID: \"1a373ec7-8da3-4b3e-a08a-e5e8b8e5a2d1\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtv85" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.462765 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fxz5k" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.477184 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv\" (UID: \"9f5d1ef8-a9b5-428a-b441-b7d763dbd102\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" Jan 29 15:44:31 crc kubenswrapper[5008]: E0129 15:44:31.477352 5008 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:44:31 crc kubenswrapper[5008]: E0129 15:44:31.477410 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert podName:9f5d1ef8-a9b5-428a-b441-b7d763dbd102 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:32.477390495 +0000 UTC m=+1016.150244722 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" (UID: "9f5d1ef8-a9b5-428a-b441-b7d763dbd102") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.502644 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-dwhc5" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.535568 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-n4xtj"] Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.572022 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtv85" Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.604817 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-s4fq5"] Jan 29 15:44:31 crc kubenswrapper[5008]: W0129 15:44:31.613748 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a610d2e_cb71_4995_a0e8_f6dc26f7664a.slice/crio-ad010e20bd793718773a829e00bceaaf737720bc9e767a94b3d7b0cedaef882a WatchSource:0}: Error finding container ad010e20bd793718773a829e00bceaaf737720bc9e767a94b3d7b0cedaef882a: Status 404 returned error can't find the container with id ad010e20bd793718773a829e00bceaaf737720bc9e767a94b3d7b0cedaef882a Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.643287 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-4zrsr"] Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.809932 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-ncxxj"] Jan 29 15:44:31 crc kubenswrapper[5008]: W0129 15:44:31.817490 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6196a4fd_8576_412f_9140_cf61b98444a4.slice/crio-a2a3a6e0cffbfd6306fc625be972ed0eb25454e7e7165c2cf379d38ef8d2da9d WatchSource:0}: Error finding container a2a3a6e0cffbfd6306fc625be972ed0eb25454e7e7165c2cf379d38ef8d2da9d: Status 404 returned error can't find the container with id a2a3a6e0cffbfd6306fc625be972ed0eb25454e7e7165c2cf379d38ef8d2da9d Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.832412 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-q7khh"] Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.852249 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-9sf7f"] Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.860659 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-44qcp"] Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.865240 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-qhwnb"] Jan 29 15:44:31 crc kubenswrapper[5008]: W0129 15:44:31.870091 5008 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode76346a9_7ba5_4178_82b7_da9f0c337c08.slice/crio-6652cb646b3b7d0ab6ba65718228dc7b25fd97f841e0cea62d5a095f5a9134f4 WatchSource:0}: Error finding container 6652cb646b3b7d0ab6ba65718228dc7b25fd97f841e0cea62d5a095f5a9134f4: Status 404 returned error can't find the container with id 6652cb646b3b7d0ab6ba65718228dc7b25fd97f841e0cea62d5a095f5a9134f4 Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.884553 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:31 crc kubenswrapper[5008]: E0129 15:44:31.884755 5008 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 15:44:31 crc kubenswrapper[5008]: E0129 15:44:31.884848 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs podName:44442d63-1bbc-4d1c-9e9d-2a9ad59baf59 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:32.884827892 +0000 UTC m=+1016.557682129 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs") pod "openstack-operator-controller-manager-77db58b9dd-srsvv" (UID: "44442d63-1bbc-4d1c-9e9d-2a9ad59baf59") : secret "webhook-server-cert" not found Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.885499 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:31 crc kubenswrapper[5008]: E0129 15:44:31.885680 5008 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 15:44:31 crc kubenswrapper[5008]: E0129 15:44:31.885749 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs podName:44442d63-1bbc-4d1c-9e9d-2a9ad59baf59 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:32.885738564 +0000 UTC m=+1016.558592801 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs") pod "openstack-operator-controller-manager-77db58b9dd-srsvv" (UID: "44442d63-1bbc-4d1c-9e9d-2a9ad59baf59") : secret "metrics-server-cert" not found Jan 29 15:44:31 crc kubenswrapper[5008]: W0129 15:44:31.988565 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcae67616_1145_4057_b304_08a322e78d9d.slice/crio-d10614f8dd7d7a5bb2be552e4ef5b438b007d215205528e2c063e8ed18e6f09b WatchSource:0}: Error finding container d10614f8dd7d7a5bb2be552e4ef5b438b007d215205528e2c063e8ed18e6f09b: Status 404 returned error can't find the container with id d10614f8dd7d7a5bb2be552e4ef5b438b007d215205528e2c063e8ed18e6f09b Jan 29 15:44:31 crc kubenswrapper[5008]: I0129 15:44:31.991581 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-qs9wh"] Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.090328 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert\") pod \"infra-operator-controller-manager-79955696d6-zvcs5\" (UID: \"4ff89cd9-951e-4907-b60c-a1a1c08007a4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.090574 5008 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.090635 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert podName:4ff89cd9-951e-4907-b60c-a1a1c08007a4 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:34.090616651 +0000 UTC m=+1017.763470888 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert") pod "infra-operator-controller-manager-79955696d6-zvcs5" (UID: "4ff89cd9-951e-4907-b60c-a1a1c08007a4") : secret "infra-operator-webhook-server-cert" not found Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.222116 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-q7khh" event={"ID":"e57e9a97-d32e-4464-b12c-ba44a4643ada","Type":"ContainerStarted","Data":"bd8c840c67bae01776abbad88025e01f5c74f210aa6629db99a41c527fef445e"} Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.225701 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-9sf7f" event={"ID":"b46e3eea-2330-4b3f-b45d-34ae38a0dde9","Type":"ContainerStarted","Data":"1804d8ec1ba634ff71f1d2c85315037de33f1ad0a73faafbc77fa78b681ea28c"} Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.227393 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-s4fq5" event={"ID":"94a4547d-0c92-41e4-8ca7-64e21df1708e","Type":"ContainerStarted","Data":"237fbc2d054c2f89367ff13756a12bae6b601f1988235ad14ac42243a4b6c2a1"} Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.228656 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hh7sg" event={"ID":"68468eb9-9e76-4f2f-9aba-cc3198e0a241","Type":"ContainerStarted","Data":"3d232da1d12d8a44b3ec70cc00ef25b778881680090ad4976d0d2a644ce54a37"} Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.229691 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-4zrsr" event={"ID":"6e775178-095e-451d-bded-b83f229c4231","Type":"ContainerStarted","Data":"657e1185611b2c6ff407043eba326f6775afb83d8d05971076623006954aea79"} Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.231324 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-ncxxj" event={"ID":"6196a4fd-8576-412f-9140-cf61b98444a4","Type":"ContainerStarted","Data":"a2a3a6e0cffbfd6306fc625be972ed0eb25454e7e7165c2cf379d38ef8d2da9d"} Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.233205 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-n4xtj" event={"ID":"7a610d2e-cb71-4995-a0e8-f6dc26f7664a","Type":"ContainerStarted","Data":"ad010e20bd793718773a829e00bceaaf737720bc9e767a94b3d7b0cedaef882a"} Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.234406 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qhwnb" event={"ID":"e76346a9-7ba5-4178-82b7-da9f0c337c08","Type":"ContainerStarted","Data":"6652cb646b3b7d0ab6ba65718228dc7b25fd97f841e0cea62d5a095f5a9134f4"} Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.235461 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-qs9wh" event={"ID":"cae67616-1145-4057-b304-08a322e78d9d","Type":"ContainerStarted","Data":"d10614f8dd7d7a5bb2be552e4ef5b438b007d215205528e2c063e8ed18e6f09b"} Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.237172 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/neutron-operator-controller-manager-585dbc889-44qcp" event={"ID":"14020423-5911-4b69-8889-b12267c9bbf9","Type":"ContainerStarted","Data":"88fac025880987908d4db4aad128cb341cbaa0305f6767863d2b7c09a983a405"} Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.304536 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-bjjwz"] Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.313069 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-fxz5k"] Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.319227 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-84h7l"] Jan 29 15:44:32 crc kubenswrapper[5008]: W0129 15:44:32.330485 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9dfe223_8569_48bb_8b52_c3fb069208a0.slice/crio-d95190da8b1d1cf0d9173e5fb1333b86e3cda4b0ef9925e24ee8499803ac6029 WatchSource:0}: Error finding container d95190da8b1d1cf0d9173e5fb1333b86e3cda4b0ef9925e24ee8499803ac6029: Status 404 returned error can't find the container with id d95190da8b1d1cf0d9173e5fb1333b86e3cda4b0ef9925e24ee8499803ac6029 Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.331447 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtv85"] Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.346161 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bbsft"] Jan 29 15:44:32 crc kubenswrapper[5008]: W0129 15:44:32.352951 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd39876a5_4ca3_44e2_a4c5_c6541c2ec812.slice/crio-ff36cc088f2bf21458f26092f66a8ae7788edd2709aa089f82a44198df91ea75 WatchSource:0}: Error finding container ff36cc088f2bf21458f26092f66a8ae7788edd2709aa089f82a44198df91ea75: Status 404 returned error can't find the container with id ff36cc088f2bf21458f26092f66a8ae7788edd2709aa089f82a44198df91ea75 Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.357389 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-qjtzq"] Jan 29 15:44:32 crc kubenswrapper[5008]: W0129 15:44:32.365259 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4fd527b_7108_4f94_b7a9_bb0b358b8c3c.slice/crio-3bec6bc83ff11a8487c6fef8f9ec3b49e1d7151f2b6f6cc9dccb913dfb03c0b5 WatchSource:0}: Error finding container 3bec6bc83ff11a8487c6fef8f9ec3b49e1d7151f2b6f6cc9dccb913dfb03c0b5: Status 404 returned error can't find the container with id 3bec6bc83ff11a8487c6fef8f9ec3b49e1d7151f2b6f6cc9dccb913dfb03c0b5 Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.366123 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-zbddd"] Jan 29 15:44:32 crc kubenswrapper[5008]: W0129 15:44:32.372056 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb2d6253_7fa7_41a9_9d0b_002ef590c4db.slice/crio-844c01bcf4e698f15c70fc0fd69337379e0f7e4bcb9d3bfe5be35978382b802c WatchSource:0}: Error 
finding container 844c01bcf4e698f15c70fc0fd69337379e0f7e4bcb9d3bfe5be35978382b802c: Status 404 returned error can't find the container with id 844c01bcf4e698f15c70fc0fd69337379e0f7e4bcb9d3bfe5be35978382b802c Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.375508 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-dwhc5"] Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.380562 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-klqvj"] Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.381637 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jv9f4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-bbsft_openstack-operators(30b3e5fd-7f41-4ed9-a1de-cb282994ad38): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 15:44:32 crc kubenswrapper[5008]: W0129 15:44:32.382618 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2163508_5800_4d97_b8d4_1f3815764822.slice/crio-ba6c5d5cebc3e92d6f8eb09049150ebbabbf36f182b5a8c8e2960450b3c182de WatchSource:0}: Error finding container 
ba6c5d5cebc3e92d6f8eb09049150ebbabbf36f182b5a8c8e2960450b3c182de: Status 404 returned error can't find the container with id ba6c5d5cebc3e92d6f8eb09049150ebbabbf36f182b5a8c8e2960450b3c182de Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.382715 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bbsft" podUID="30b3e5fd-7f41-4ed9-a1de-cb282994ad38" Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.384642 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9v2kd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-vtv85_openstack-operators(1a373ec7-8da3-4b3e-a08a-e5e8b8e5a2d1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.385734 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtv85" podUID="1a373ec7-8da3-4b3e-a08a-e5e8b8e5a2d1" Jan 29 15:44:32 crc kubenswrapper[5008]: W0129 15:44:32.386308 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce6a1921_bd9b_47c4_8f5f_9443d8e4c08f.slice/crio-14639791b938757787bd6781918c0ef1dbe334455a0aa388d5b9ef1d618efec1 WatchSource:0}: Error finding container 14639791b938757787bd6781918c0ef1dbe334455a0aa388d5b9ef1d618efec1: Status 404 returned error can't find the container with id 
14639791b938757787bd6781918c0ef1dbe334455a0aa388d5b9ef1d618efec1 Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.386693 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-xjf4m"] Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.387761 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fc2vx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-dwhc5_openstack-operators(a2163508-5800-4d97-b8d4-1f3815764822): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 15:44:32 crc kubenswrapper[5008]: W0129 15:44:32.388653 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27a92a88_ee29_47fd_b4cf_5e3232ce7573.slice/crio-87837d9d3e870c62d7cd0646c9d0135cace81c628da074f49ead5115fe548168 WatchSource:0}: Error finding container 87837d9d3e870c62d7cd0646c9d0135cace81c628da074f49ead5115fe548168: Status 404 returned error can't find the container with id 87837d9d3e870c62d7cd0646c9d0135cace81c628da074f49ead5115fe548168 Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.389130 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS 
exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-dwhc5" podUID="a2163508-5800-4d97-b8d4-1f3815764822" Jan 29 15:44:32 crc kubenswrapper[5008]: W0129 15:44:32.389484 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4dc123ee_b76c_46a7_9aea_76457232036b.slice/crio-5079362c6381da71082ca15f5ef72d62bbac4279eec714faabcc1c6cc444a4ab WatchSource:0}: Error finding container 5079362c6381da71082ca15f5ef72d62bbac4279eec714faabcc1c6cc444a4ab: Status 404 returned error can't find the container with id 5079362c6381da71082ca15f5ef72d62bbac4279eec714faabcc1c6cc444a4ab Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.391359 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8wdbm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-xjf4m_openstack-operators(ce6a1921-bd9b-47c4-8f5f-9443d8e4c08f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.391697 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect 
--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ccnhz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-klqvj_openstack-operators(27a92a88-ee29-47fd-b4cf-5e3232ce7573): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.392613 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xjf4m" podUID="ce6a1921-bd9b-47c4-8f5f-9443d8e4c08f" Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.392712 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t59lc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6687f8d877-zbddd_openstack-operators(4dc123ee-b76c-46a7-9aea-76457232036b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.392806 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-klqvj" podUID="27a92a88-ee29-47fd-b4cf-5e3232ce7573" Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.394315 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zbddd" podUID="4dc123ee-b76c-46a7-9aea-76457232036b" Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.496636 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv\" (UID: \"9f5d1ef8-a9b5-428a-b441-b7d763dbd102\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.496920 5008 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.497043 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert podName:9f5d1ef8-a9b5-428a-b441-b7d763dbd102 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:34.497018924 +0000 UTC m=+1018.169873161 (durationBeforeRetry 2s). 
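[Editor's note] The long &Container{...} dumps are kuberuntime_manager logging the full container spec it failed to start; the actual failure in every case is "pull QPS exceeded". That is not a registry error: the kubelet rate-limits image pulls with a token bucket (KubeletConfiguration registryPullQPS/registryBurst, which default to 5 and 10), and a dozen operator deployments starting at once drains the burst. A sketch of the same arithmetic with golang.org/x/time/rate; this mirrors the limiter's behavior, it is not the kubelet's own instance:

```go
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	// Token bucket mirroring registryPullQPS=5, registryBurst=10: twelve
	// near-simultaneous pulls drain the burst of 10 and the remainder fail
	// immediately, matching the "pull QPS exceeded" pattern in this log.
	limiter := rate.NewLimiter(rate.Limit(5), 10)
	for pull := 1; pull <= 12; pull++ {
		if limiter.Allow() {
			fmt.Printf("pull %2d: allowed\n", pull)
		} else {
			fmt.Printf("pull %2d: ErrImagePull: pull QPS exceeded\n", pull)
		}
	}
}
```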
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" (UID: "9f5d1ef8-a9b5-428a-b441-b7d763dbd102") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.902835 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:32 crc kubenswrapper[5008]: I0129 15:44:32.902923 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.903031 5008 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.903039 5008 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.903098 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs podName:44442d63-1bbc-4d1c-9e9d-2a9ad59baf59 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:34.903081778 +0000 UTC m=+1018.575936015 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs") pod "openstack-operator-controller-manager-77db58b9dd-srsvv" (UID: "44442d63-1bbc-4d1c-9e9d-2a9ad59baf59") : secret "webhook-server-cert" not found Jan 29 15:44:32 crc kubenswrapper[5008]: E0129 15:44:32.903114 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs podName:44442d63-1bbc-4d1c-9e9d-2a9ad59baf59 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:34.903107569 +0000 UTC m=+1018.575961806 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs") pod "openstack-operator-controller-manager-77db58b9dd-srsvv" (UID: "44442d63-1bbc-4d1c-9e9d-2a9ad59baf59") : secret "metrics-server-cert" not found Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.249731 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtv85" event={"ID":"1a373ec7-8da3-4b3e-a08a-e5e8b8e5a2d1","Type":"ContainerStarted","Data":"d347bb26baa2b5e2390ee7830502bf1d18ba28d924e711333fc6862bfaf47ba2"} Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.251526 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qjtzq" event={"ID":"cb2d6253-7fa7-41a9-9d0b-002ef590c4db","Type":"ContainerStarted","Data":"844c01bcf4e698f15c70fc0fd69337379e0f7e4bcb9d3bfe5be35978382b802c"} Jan 29 15:44:33 crc kubenswrapper[5008]: E0129 15:44:33.253169 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtv85" podUID="1a373ec7-8da3-4b3e-a08a-e5e8b8e5a2d1" Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.254360 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-klqvj" event={"ID":"27a92a88-ee29-47fd-b4cf-5e3232ce7573","Type":"ContainerStarted","Data":"87837d9d3e870c62d7cd0646c9d0135cace81c628da074f49ead5115fe548168"} Jan 29 15:44:33 crc kubenswrapper[5008]: E0129 15:44:33.256707 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-klqvj" podUID="27a92a88-ee29-47fd-b4cf-5e3232ce7573" Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.257326 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zbddd" event={"ID":"4dc123ee-b76c-46a7-9aea-76457232036b","Type":"ContainerStarted","Data":"5079362c6381da71082ca15f5ef72d62bbac4279eec714faabcc1c6cc444a4ab"} Jan 29 15:44:33 crc kubenswrapper[5008]: E0129 15:44:33.261556 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zbddd" podUID="4dc123ee-b76c-46a7-9aea-76457232036b" Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.273543 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-84h7l" event={"ID":"a9dfe223-8569-48bb-8b52-c3fb069208a0","Type":"ContainerStarted","Data":"d95190da8b1d1cf0d9173e5fb1333b86e3cda4b0ef9925e24ee8499803ac6029"} Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.277248 5008 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xjf4m" event={"ID":"ce6a1921-bd9b-47c4-8f5f-9443d8e4c08f","Type":"ContainerStarted","Data":"14639791b938757787bd6781918c0ef1dbe334455a0aa388d5b9ef1d618efec1"} Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.278482 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fxz5k" event={"ID":"d4fd527b-7108-4f94-b7a9-bb0b358b8c3c","Type":"ContainerStarted","Data":"3bec6bc83ff11a8487c6fef8f9ec3b49e1d7151f2b6f6cc9dccb913dfb03c0b5"} Jan 29 15:44:33 crc kubenswrapper[5008]: E0129 15:44:33.279023 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xjf4m" podUID="ce6a1921-bd9b-47c4-8f5f-9443d8e4c08f" Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.280675 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bjjwz" event={"ID":"d39876a5-4ca3-44e2-a4c5-c6541c2ec812","Type":"ContainerStarted","Data":"ff36cc088f2bf21458f26092f66a8ae7788edd2709aa089f82a44198df91ea75"} Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.285694 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-dwhc5" event={"ID":"a2163508-5800-4d97-b8d4-1f3815764822","Type":"ContainerStarted","Data":"ba6c5d5cebc3e92d6f8eb09049150ebbabbf36f182b5a8c8e2960450b3c182de"} Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.287884 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bbsft" event={"ID":"30b3e5fd-7f41-4ed9-a1de-cb282994ad38","Type":"ContainerStarted","Data":"32e4d6aa080b5c9dcae2577e2453dcc86db2538c8f7f8833e02b71d908e785f6"} Jan 29 15:44:33 crc kubenswrapper[5008]: E0129 15:44:33.288056 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-dwhc5" podUID="a2163508-5800-4d97-b8d4-1f3815764822" Jan 29 15:44:33 crc kubenswrapper[5008]: E0129 15:44:33.289181 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bbsft" podUID="30b3e5fd-7f41-4ed9-a1de-cb282994ad38" Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.512833 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z75gs"] Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.514492 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.522380 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z75gs"] Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.617464 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/014fe771-fe01-4b92-b038-862615b75136-catalog-content\") pod \"redhat-marketplace-z75gs\" (UID: \"014fe771-fe01-4b92-b038-862615b75136\") " pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.617559 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg272\" (UniqueName: \"kubernetes.io/projected/014fe771-fe01-4b92-b038-862615b75136-kube-api-access-tg272\") pod \"redhat-marketplace-z75gs\" (UID: \"014fe771-fe01-4b92-b038-862615b75136\") " pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.617614 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/014fe771-fe01-4b92-b038-862615b75136-utilities\") pod \"redhat-marketplace-z75gs\" (UID: \"014fe771-fe01-4b92-b038-862615b75136\") " pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.719361 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/014fe771-fe01-4b92-b038-862615b75136-catalog-content\") pod \"redhat-marketplace-z75gs\" (UID: \"014fe771-fe01-4b92-b038-862615b75136\") " pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.719460 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg272\" (UniqueName: \"kubernetes.io/projected/014fe771-fe01-4b92-b038-862615b75136-kube-api-access-tg272\") pod \"redhat-marketplace-z75gs\" (UID: \"014fe771-fe01-4b92-b038-862615b75136\") " pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.719506 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/014fe771-fe01-4b92-b038-862615b75136-utilities\") pod \"redhat-marketplace-z75gs\" (UID: \"014fe771-fe01-4b92-b038-862615b75136\") " pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.719965 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/014fe771-fe01-4b92-b038-862615b75136-utilities\") pod \"redhat-marketplace-z75gs\" (UID: \"014fe771-fe01-4b92-b038-862615b75136\") " pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.720001 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/014fe771-fe01-4b92-b038-862615b75136-catalog-content\") pod \"redhat-marketplace-z75gs\" (UID: \"014fe771-fe01-4b92-b038-862615b75136\") " pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.761443 5008 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-tg272\" (UniqueName: \"kubernetes.io/projected/014fe771-fe01-4b92-b038-862615b75136-kube-api-access-tg272\") pod \"redhat-marketplace-z75gs\" (UID: \"014fe771-fe01-4b92-b038-862615b75136\") " pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:44:33 crc kubenswrapper[5008]: I0129 15:44:33.834909 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:44:34 crc kubenswrapper[5008]: I0129 15:44:34.124937 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert\") pod \"infra-operator-controller-manager-79955696d6-zvcs5\" (UID: \"4ff89cd9-951e-4907-b60c-a1a1c08007a4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" Jan 29 15:44:34 crc kubenswrapper[5008]: E0129 15:44:34.125121 5008 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 15:44:34 crc kubenswrapper[5008]: E0129 15:44:34.125424 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert podName:4ff89cd9-951e-4907-b60c-a1a1c08007a4 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:38.125402102 +0000 UTC m=+1021.798256339 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert") pod "infra-operator-controller-manager-79955696d6-zvcs5" (UID: "4ff89cd9-951e-4907-b60c-a1a1c08007a4") : secret "infra-operator-webhook-server-cert" not found Jan 29 15:44:34 crc kubenswrapper[5008]: E0129 15:44:34.297379 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-dwhc5" podUID="a2163508-5800-4d97-b8d4-1f3815764822" Jan 29 15:44:34 crc kubenswrapper[5008]: E0129 15:44:34.297909 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zbddd" podUID="4dc123ee-b76c-46a7-9aea-76457232036b" Jan 29 15:44:34 crc kubenswrapper[5008]: E0129 15:44:34.297927 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bbsft" podUID="30b3e5fd-7f41-4ed9-a1de-cb282994ad38" Jan 29 15:44:34 crc kubenswrapper[5008]: E0129 15:44:34.297969 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" 
pod="openstack-operators/nova-operator-controller-manager-55bff696bd-klqvj" podUID="27a92a88-ee29-47fd-b4cf-5e3232ce7573" Jan 29 15:44:34 crc kubenswrapper[5008]: E0129 15:44:34.297982 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtv85" podUID="1a373ec7-8da3-4b3e-a08a-e5e8b8e5a2d1" Jan 29 15:44:34 crc kubenswrapper[5008]: E0129 15:44:34.298018 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xjf4m" podUID="ce6a1921-bd9b-47c4-8f5f-9443d8e4c08f" Jan 29 15:44:34 crc kubenswrapper[5008]: I0129 15:44:34.345720 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z75gs"] Jan 29 15:44:34 crc kubenswrapper[5008]: I0129 15:44:34.535832 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv\" (UID: \"9f5d1ef8-a9b5-428a-b441-b7d763dbd102\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" Jan 29 15:44:34 crc kubenswrapper[5008]: E0129 15:44:34.536050 5008 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:44:34 crc kubenswrapper[5008]: E0129 15:44:34.536149 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert podName:9f5d1ef8-a9b5-428a-b441-b7d763dbd102 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:38.536125699 +0000 UTC m=+1022.208979986 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" (UID: "9f5d1ef8-a9b5-428a-b441-b7d763dbd102") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:44:34 crc kubenswrapper[5008]: I0129 15:44:34.941555 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:34 crc kubenswrapper[5008]: I0129 15:44:34.941649 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:34 crc kubenswrapper[5008]: E0129 15:44:34.941809 5008 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 15:44:34 crc kubenswrapper[5008]: E0129 15:44:34.941821 5008 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 15:44:34 crc kubenswrapper[5008]: E0129 15:44:34.941861 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs podName:44442d63-1bbc-4d1c-9e9d-2a9ad59baf59 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:38.941843305 +0000 UTC m=+1022.614697542 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs") pod "openstack-operator-controller-manager-77db58b9dd-srsvv" (UID: "44442d63-1bbc-4d1c-9e9d-2a9ad59baf59") : secret "webhook-server-cert" not found Jan 29 15:44:34 crc kubenswrapper[5008]: E0129 15:44:34.941915 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs podName:44442d63-1bbc-4d1c-9e9d-2a9ad59baf59 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:38.941892327 +0000 UTC m=+1022.614746614 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs") pod "openstack-operator-controller-manager-77db58b9dd-srsvv" (UID: "44442d63-1bbc-4d1c-9e9d-2a9ad59baf59") : secret "metrics-server-cert" not found Jan 29 15:44:36 crc kubenswrapper[5008]: E0129 15:44:36.523572 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:44:36 crc kubenswrapper[5008]: E0129 15:44:36.524035 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gkwsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9l2c6_openshift-marketplace(decefe5c-189e-43f8-88b2-f93a00567c3e): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:44:36 crc kubenswrapper[5008]: E0129 15:44:36.525683 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-9l2c6" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" Jan 29 15:44:37 crc kubenswrapper[5008]: I0129 15:44:37.315051 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z75gs" event={"ID":"014fe771-fe01-4b92-b038-862615b75136","Type":"ContainerStarted","Data":"3d4dceb557efb379fc43836d7c0b6854e7a45385d099f1155ac83813cd0b127b"} Jan 29 15:44:38 crc kubenswrapper[5008]: I0129 15:44:38.186583 5008 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert\") pod \"infra-operator-controller-manager-79955696d6-zvcs5\" (UID: \"4ff89cd9-951e-4907-b60c-a1a1c08007a4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" Jan 29 15:44:38 crc kubenswrapper[5008]: E0129 15:44:38.186753 5008 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 15:44:38 crc kubenswrapper[5008]: E0129 15:44:38.186828 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert podName:4ff89cd9-951e-4907-b60c-a1a1c08007a4 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:46.186810446 +0000 UTC m=+1029.859664683 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert") pod "infra-operator-controller-manager-79955696d6-zvcs5" (UID: "4ff89cd9-951e-4907-b60c-a1a1c08007a4") : secret "infra-operator-webhook-server-cert" not found Jan 29 15:44:38 crc kubenswrapper[5008]: I0129 15:44:38.593043 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv\" (UID: \"9f5d1ef8-a9b5-428a-b441-b7d763dbd102\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" Jan 29 15:44:38 crc kubenswrapper[5008]: E0129 15:44:38.593246 5008 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:44:38 crc kubenswrapper[5008]: E0129 15:44:38.593331 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert podName:9f5d1ef8-a9b5-428a-b441-b7d763dbd102 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:46.593309661 +0000 UTC m=+1030.266163898 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" (UID: "9f5d1ef8-a9b5-428a-b441-b7d763dbd102") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:44:38 crc kubenswrapper[5008]: I0129 15:44:38.999037 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:38 crc kubenswrapper[5008]: I0129 15:44:38.999248 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:38 crc kubenswrapper[5008]: E0129 15:44:38.999302 5008 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 15:44:38 crc kubenswrapper[5008]: E0129 15:44:38.999422 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs podName:44442d63-1bbc-4d1c-9e9d-2a9ad59baf59 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:46.999391476 +0000 UTC m=+1030.672245743 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs") pod "openstack-operator-controller-manager-77db58b9dd-srsvv" (UID: "44442d63-1bbc-4d1c-9e9d-2a9ad59baf59") : secret "webhook-server-cert" not found Jan 29 15:44:38 crc kubenswrapper[5008]: E0129 15:44:38.999465 5008 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 15:44:38 crc kubenswrapper[5008]: E0129 15:44:38.999518 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs podName:44442d63-1bbc-4d1c-9e9d-2a9ad59baf59 nodeName:}" failed. No retries permitted until 2026-01-29 15:44:46.999502679 +0000 UTC m=+1030.672356936 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs") pod "openstack-operator-controller-manager-77db58b9dd-srsvv" (UID: "44442d63-1bbc-4d1c-9e9d-2a9ad59baf59") : secret "metrics-server-cert" not found Jan 29 15:44:39 crc kubenswrapper[5008]: E0129 15:44:39.666898 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6kzcj" podUID="c82fc869-759d-4902-9aef-fdd69452b420" Jan 29 15:44:43 crc kubenswrapper[5008]: I0129 15:44:43.991108 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:44:43 crc kubenswrapper[5008]: I0129 15:44:43.991567 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:44:46 crc kubenswrapper[5008]: I0129 15:44:46.208580 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert\") pod \"infra-operator-controller-manager-79955696d6-zvcs5\" (UID: \"4ff89cd9-951e-4907-b60c-a1a1c08007a4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" Jan 29 15:44:46 crc kubenswrapper[5008]: I0129 15:44:46.215330 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4ff89cd9-951e-4907-b60c-a1a1c08007a4-cert\") pod \"infra-operator-controller-manager-79955696d6-zvcs5\" (UID: \"4ff89cd9-951e-4907-b60c-a1a1c08007a4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" Jan 29 15:44:46 crc kubenswrapper[5008]: I0129 15:44:46.390301 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-tbwkr" Jan 29 15:44:46 crc kubenswrapper[5008]: I0129 15:44:46.400013 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" Jan 29 15:44:46 crc kubenswrapper[5008]: I0129 15:44:46.615209 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv\" (UID: \"9f5d1ef8-a9b5-428a-b441-b7d763dbd102\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" Jan 29 15:44:46 crc kubenswrapper[5008]: I0129 15:44:46.622666 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9f5d1ef8-a9b5-428a-b441-b7d763dbd102-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv\" (UID: \"9f5d1ef8-a9b5-428a-b441-b7d763dbd102\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" Jan 29 15:44:46 crc kubenswrapper[5008]: I0129 15:44:46.721497 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-wmglr" Jan 29 15:44:46 crc kubenswrapper[5008]: I0129 15:44:46.729627 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" Jan 29 15:44:47 crc kubenswrapper[5008]: I0129 15:44:47.021006 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:47 crc kubenswrapper[5008]: I0129 15:44:47.021362 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:47 crc kubenswrapper[5008]: I0129 15:44:47.028899 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-webhook-certs\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:47 crc kubenswrapper[5008]: I0129 15:44:47.033422 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44442d63-1bbc-4d1c-9e9d-2a9ad59baf59-metrics-certs\") pod \"openstack-operator-controller-manager-77db58b9dd-srsvv\" (UID: \"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59\") " pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:47 crc kubenswrapper[5008]: I0129 15:44:47.153029 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-wddh7" Jan 29 15:44:47 crc kubenswrapper[5008]: I0129 15:44:47.160906 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:44:47 crc kubenswrapper[5008]: E0129 15:44:47.354722 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8" Jan 29 15:44:47 crc kubenswrapper[5008]: E0129 15:44:47.354960 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qmws6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5fb775575f-qs9wh_openstack-operators(cae67616-1145-4057-b304-08a322e78d9d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:44:47 crc kubenswrapper[5008]: E0129 15:44:47.356245 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-qs9wh" podUID="cae67616-1145-4057-b304-08a322e78d9d" Jan 29 15:44:47 crc kubenswrapper[5008]: E0129 15:44:47.393602 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-qs9wh" podUID="cae67616-1145-4057-b304-08a322e78d9d" Jan 29 15:44:47 crc kubenswrapper[5008]: E0129 15:44:47.907412 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382" Jan 29 15:44:47 crc kubenswrapper[5008]: E0129 15:44:47.907617 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-twwv9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-84h7l_openstack-operators(a9dfe223-8569-48bb-8b52-c3fb069208a0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:44:47 crc kubenswrapper[5008]: E0129 15:44:47.908848 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/swift-operator-controller-manager-68fc8c869-84h7l" podUID="a9dfe223-8569-48bb-8b52-c3fb069208a0" Jan 29 15:44:48 crc kubenswrapper[5008]: E0129 15:44:48.401241 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-84h7l" podUID="a9dfe223-8569-48bb-8b52-c3fb069208a0" Jan 29 15:44:48 crc kubenswrapper[5008]: E0129 15:44:48.488858 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9l2c6" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" Jan 29 15:44:48 crc kubenswrapper[5008]: E0129 15:44:48.513280 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10" Jan 29 15:44:48 crc kubenswrapper[5008]: E0129 15:44:48.513510 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8wxlp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69d6db494d-9sf7f_openstack-operators(b46e3eea-2330-4b3f-b45d-34ae38a0dde9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:44:48 crc kubenswrapper[5008]: E0129 15:44:48.514685 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-9sf7f" podUID="b46e3eea-2330-4b3f-b45d-34ae38a0dde9" Jan 29 15:44:49 crc kubenswrapper[5008]: E0129 15:44:49.406959 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-9sf7f" podUID="b46e3eea-2330-4b3f-b45d-34ae38a0dde9" Jan 29 15:44:50 crc kubenswrapper[5008]: E0129 15:44:50.484330 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6" Jan 29 15:44:50 crc kubenswrapper[5008]: E0129 15:44:50.484564 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gmdcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-44qcp_openstack-operators(14020423-5911-4b69-8889-b12267c9bbf9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:44:50 crc kubenswrapper[5008]: E0129 15:44:50.485767 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-44qcp" podUID="14020423-5911-4b69-8889-b12267c9bbf9" Jan 29 15:44:51 crc kubenswrapper[5008]: E0129 15:44:51.041963 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566" Jan 29 15:44:51 crc kubenswrapper[5008]: E0129 15:44:51.042143 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wk5pq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7dd968899f-q7khh_openstack-operators(e57e9a97-d32e-4464-b12c-ba44a4643ada): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:44:51 crc kubenswrapper[5008]: E0129 15:44:51.043430 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-q7khh" podUID="e57e9a97-d32e-4464-b12c-ba44a4643ada" Jan 29 15:44:51 crc kubenswrapper[5008]: E0129 15:44:51.436835 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-q7khh" podUID="e57e9a97-d32e-4464-b12c-ba44a4643ada" Jan 29 15:44:51 crc kubenswrapper[5008]: E0129 15:44:51.437212 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-44qcp" podUID="14020423-5911-4b69-8889-b12267c9bbf9" Jan 29 15:44:51 crc kubenswrapper[5008]: E0129 15:44:51.843889 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf" Jan 29 15:44:51 crc kubenswrapper[5008]: E0129 15:44:51.844083 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8hpbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-bjjwz_openstack-operators(d39876a5-4ca3-44e2-a4c5-c6541c2ec812): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:44:51 crc kubenswrapper[5008]: E0129 15:44:51.845286 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bjjwz" podUID="d39876a5-4ca3-44e2-a4c5-c6541c2ec812" Jan 29 15:44:52 crc kubenswrapper[5008]: E0129 15:44:52.443139 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bjjwz" podUID="d39876a5-4ca3-44e2-a4c5-c6541c2ec812" Jan 29 15:44:53 crc kubenswrapper[5008]: E0129 15:44:53.534086 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241" Jan 29 15:44:53 crc kubenswrapper[5008]: E0129 15:44:53.534265 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bm9mw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-fxz5k_openstack-operators(d4fd527b-7108-4f94-b7a9-bb0b358b8c3c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:44:53 crc kubenswrapper[5008]: E0129 15:44:53.535480 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fxz5k" podUID="d4fd527b-7108-4f94-b7a9-bb0b358b8c3c" Jan 29 15:44:53 crc kubenswrapper[5008]: E0129 15:44:53.890670 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17" Jan 29 15:44:53 crc kubenswrapper[5008]: E0129 15:44:53.890999 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j7jgj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-qhwnb_openstack-operators(e76346a9-7ba5-4178-82b7-da9f0c337c08): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:44:53 crc kubenswrapper[5008]: E0129 15:44:53.892880 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qhwnb" podUID="e76346a9-7ba5-4178-82b7-da9f0c337c08" Jan 29 15:44:53 crc kubenswrapper[5008]: E0129 15:44:53.960739 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:44:53 crc kubenswrapper[5008]: E0129 15:44:53.960967 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6t5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6kzcj_openshift-marketplace(c82fc869-759d-4902-9aef-fdd69452b420): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:44:53 crc kubenswrapper[5008]: E0129 15:44:53.962177 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-6kzcj" podUID="c82fc869-759d-4902-9aef-fdd69452b420" Jan 29 15:44:54 crc kubenswrapper[5008]: E0129 15:44:54.456006 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fxz5k" podUID="d4fd527b-7108-4f94-b7a9-bb0b358b8c3c" Jan 29 15:44:54 crc kubenswrapper[5008]: E0129 15:44:54.456936 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qhwnb" podUID="e76346a9-7ba5-4178-82b7-da9f0c337c08" Jan 29 15:45:00 crc kubenswrapper[5008]: I0129 15:45:00.143463 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh"] Jan 29 15:45:00 crc kubenswrapper[5008]: I0129 15:45:00.146193 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh" Jan 29 15:45:00 crc kubenswrapper[5008]: I0129 15:45:00.151532 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh"] Jan 29 15:45:00 crc kubenswrapper[5008]: I0129 15:45:00.154072 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 15:45:00 crc kubenswrapper[5008]: I0129 15:45:00.154122 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 15:45:00 crc kubenswrapper[5008]: I0129 15:45:00.227472 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-config-volume\") pod \"collect-profiles-29495025-5c6mh\" (UID: \"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh" Jan 29 15:45:00 crc kubenswrapper[5008]: I0129 15:45:00.227609 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5tsl\" (UniqueName: \"kubernetes.io/projected/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-kube-api-access-b5tsl\") pod \"collect-profiles-29495025-5c6mh\" (UID: \"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh" Jan 29 15:45:00 crc kubenswrapper[5008]: I0129 15:45:00.227661 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-secret-volume\") pod \"collect-profiles-29495025-5c6mh\" (UID: \"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh" Jan 29 15:45:00 crc kubenswrapper[5008]: I0129 15:45:00.333343 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5tsl\" (UniqueName: \"kubernetes.io/projected/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-kube-api-access-b5tsl\") pod \"collect-profiles-29495025-5c6mh\" (UID: \"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh" Jan 29 15:45:00 crc kubenswrapper[5008]: I0129 15:45:00.333394 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-secret-volume\") pod \"collect-profiles-29495025-5c6mh\" (UID: \"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh" Jan 29 15:45:00 crc kubenswrapper[5008]: I0129 15:45:00.333448 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-config-volume\") pod \"collect-profiles-29495025-5c6mh\" (UID: \"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh" Jan 29 15:45:00 crc kubenswrapper[5008]: I0129 15:45:00.336410 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-config-volume\") pod 
\"collect-profiles-29495025-5c6mh\" (UID: \"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh" Jan 29 15:45:00 crc kubenswrapper[5008]: I0129 15:45:00.342677 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-secret-volume\") pod \"collect-profiles-29495025-5c6mh\" (UID: \"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh" Jan 29 15:45:00 crc kubenswrapper[5008]: I0129 15:45:00.349226 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5tsl\" (UniqueName: \"kubernetes.io/projected/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-kube-api-access-b5tsl\") pod \"collect-profiles-29495025-5c6mh\" (UID: \"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh" Jan 29 15:45:00 crc kubenswrapper[5008]: I0129 15:45:00.482912 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh" Jan 29 15:45:01 crc kubenswrapper[5008]: E0129 15:45:01.795041 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 29 15:45:01 crc kubenswrapper[5008]: E0129 15:45:01.795546 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9v2kd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
rabbitmq-cluster-operator-manager-668c99d594-vtv85_openstack-operators(1a373ec7-8da3-4b3e-a08a-e5e8b8e5a2d1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:45:01 crc kubenswrapper[5008]: E0129 15:45:01.796848 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtv85" podUID="1a373ec7-8da3-4b3e-a08a-e5e8b8e5a2d1" Jan 29 15:45:02 crc kubenswrapper[5008]: W0129 15:45:02.434662 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44442d63_1bbc_4d1c_9e9d_2a9ad59baf59.slice/crio-b2b4b257c1e2e613d3b85d65807876417cd66f497d0755a83c9f76778be36b25 WatchSource:0}: Error finding container b2b4b257c1e2e613d3b85d65807876417cd66f497d0755a83c9f76778be36b25: Status 404 returned error can't find the container with id b2b4b257c1e2e613d3b85d65807876417cd66f497d0755a83c9f76778be36b25 Jan 29 15:45:02 crc kubenswrapper[5008]: I0129 15:45:02.436481 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv"] Jan 29 15:45:02 crc kubenswrapper[5008]: I0129 15:45:02.479433 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5"] Jan 29 15:45:02 crc kubenswrapper[5008]: W0129 15:45:02.490704 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ff89cd9_951e_4907_b60c_a1a1c08007a4.slice/crio-f7ebc82a36e4c12e5c6d40e020cb1fd798c35ca22216dfeeb9ce08450114850d WatchSource:0}: Error finding container f7ebc82a36e4c12e5c6d40e020cb1fd798c35ca22216dfeeb9ce08450114850d: Status 404 returned error can't find the container with id f7ebc82a36e4c12e5c6d40e020cb1fd798c35ca22216dfeeb9ce08450114850d Jan 29 15:45:02 crc kubenswrapper[5008]: I0129 15:45:02.515033 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" event={"ID":"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59","Type":"ContainerStarted","Data":"b2b4b257c1e2e613d3b85d65807876417cd66f497d0755a83c9f76778be36b25"} Jan 29 15:45:02 crc kubenswrapper[5008]: I0129 15:45:02.516486 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" event={"ID":"4ff89cd9-951e-4907-b60c-a1a1c08007a4","Type":"ContainerStarted","Data":"f7ebc82a36e4c12e5c6d40e020cb1fd798c35ca22216dfeeb9ce08450114850d"} Jan 29 15:45:02 crc kubenswrapper[5008]: I0129 15:45:02.531515 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh"] Jan 29 15:45:02 crc kubenswrapper[5008]: W0129 15:45:02.540841 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bfb4d07_e2b9_42e2_951c_3d9f2ad23202.slice/crio-4ba45acf3a1ef175f4029e9d7b056c8442e4ecfde30985996aca525c99650ef6 WatchSource:0}: Error finding container 4ba45acf3a1ef175f4029e9d7b056c8442e4ecfde30985996aca525c99650ef6: Status 404 returned error can't find the container with id 4ba45acf3a1ef175f4029e9d7b056c8442e4ecfde30985996aca525c99650ef6 Jan 29 15:45:02 crc kubenswrapper[5008]: I0129 
15:45:02.545543 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv"] Jan 29 15:45:02 crc kubenswrapper[5008]: W0129 15:45:02.549467 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f5d1ef8_a9b5_428a_b441_b7d763dbd102.slice/crio-f81d007337c0ae0ba3a13af9502663e1c87ee952f3d7b62a069c96af34017843 WatchSource:0}: Error finding container f81d007337c0ae0ba3a13af9502663e1c87ee952f3d7b62a069c96af34017843: Status 404 returned error can't find the container with id f81d007337c0ae0ba3a13af9502663e1c87ee952f3d7b62a069c96af34017843 Jan 29 15:45:03 crc kubenswrapper[5008]: E0129 15:45:03.459732 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:45:03 crc kubenswrapper[5008]: E0129 15:45:03.460448 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gkwsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9l2c6_openshift-marketplace(decefe5c-189e-43f8-88b2-f93a00567c3e): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:45:03 crc kubenswrapper[5008]: E0129 15:45:03.461706 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-9l2c6" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" Jan 29 15:45:03 crc 
kubenswrapper[5008]: I0129 15:45:03.535365 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-q7khh" event={"ID":"e57e9a97-d32e-4464-b12c-ba44a4643ada","Type":"ContainerStarted","Data":"c2d1b7d8799ed8d3a59de78318cd3be39480998c5c215d797defc6b83d404a15"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.536234 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-q7khh" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.540207 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bbsft" event={"ID":"30b3e5fd-7f41-4ed9-a1de-cb282994ad38","Type":"ContainerStarted","Data":"b9ee149dddccb6b9517f9dba8a3e94506d3a274e3c46deca091303149a12db0b"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.540692 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bbsft" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.542656 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-s4fq5" event={"ID":"94a4547d-0c92-41e4-8ca7-64e21df1708e","Type":"ContainerStarted","Data":"64f2e5197bfbd62ff69cdf92e9b2adf305abbe989c3646b5d6c2502257b9d949"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.543082 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-s4fq5" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.544387 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-4zrsr" event={"ID":"6e775178-095e-451d-bded-b83f229c4231","Type":"ContainerStarted","Data":"2b112ab79d1cb7717290a5b9b1e5c0e493c3116c27a9deb5b4aade6377ffa3ab"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.544723 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-4zrsr" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.547528 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-84h7l" event={"ID":"a9dfe223-8569-48bb-8b52-c3fb069208a0","Type":"ContainerStarted","Data":"e25ead091de4fe779a23ce535171ba2caba03f9b38ac241dc91dc997c469f2dd"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.547930 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-84h7l" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.551485 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xjf4m" event={"ID":"ce6a1921-bd9b-47c4-8f5f-9443d8e4c08f","Type":"ContainerStarted","Data":"40b0246239a5efd3622070802992e9db11b54f99b0aa46b0f17e0f85ee43399b"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.551930 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xjf4m" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.554932 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-q7khh" 
podStartSLOduration=2.501005741 podStartE2EDuration="33.554924167s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:31.842214061 +0000 UTC m=+1015.515068298" lastFinishedPulling="2026-01-29 15:45:02.896132487 +0000 UTC m=+1046.568986724" observedRunningTime="2026-01-29 15:45:03.552676072 +0000 UTC m=+1047.225530319" watchObservedRunningTime="2026-01-29 15:45:03.554924167 +0000 UTC m=+1047.227778414" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.557670 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-ncxxj" event={"ID":"6196a4fd-8576-412f-9140-cf61b98444a4","Type":"ContainerStarted","Data":"edbeeb5eeb10303c1729a87a26721192931478d08054dbc30d7b715261ea147f"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.558244 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-ncxxj" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.560139 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zbddd" event={"ID":"4dc123ee-b76c-46a7-9aea-76457232036b","Type":"ContainerStarted","Data":"029183a84c3497c531d73ec4b22d0d8d25ae57bbac89692e9efb623ca705da2c"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.560491 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zbddd" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.569018 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" event={"ID":"44442d63-1bbc-4d1c-9e9d-2a9ad59baf59","Type":"ContainerStarted","Data":"3ca73951a4eb2de3393864c3bd4a6f147981f65407fc2d62e07b35712b10a665"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.569170 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.572456 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-klqvj" event={"ID":"27a92a88-ee29-47fd-b4cf-5e3232ce7573","Type":"ContainerStarted","Data":"a7daabfe5c20ff13f77d5f49309357873dfc9e580fe3d9d806a7bf1061d9fb7b"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.573043 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-klqvj" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.578134 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-s4fq5" podStartSLOduration=11.351185617 podStartE2EDuration="33.578120309s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:31.609211214 +0000 UTC m=+1015.282065451" lastFinishedPulling="2026-01-29 15:44:53.836145906 +0000 UTC m=+1037.509000143" observedRunningTime="2026-01-29 15:45:03.575776302 +0000 UTC m=+1047.248630539" watchObservedRunningTime="2026-01-29 15:45:03.578120309 +0000 UTC m=+1047.250974546" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.588508 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hh7sg" 
event={"ID":"68468eb9-9e76-4f2f-9aba-cc3198e0a241","Type":"ContainerStarted","Data":"dec822d4cfb2ad4f55625cd2cae1ce5a25bcd65ed8be0ea74f9941e89df21308"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.588543 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hh7sg" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.594912 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xjf4m" podStartSLOduration=4.014153501 podStartE2EDuration="33.594900155s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:32.391213174 +0000 UTC m=+1016.064067411" lastFinishedPulling="2026-01-29 15:45:01.971959818 +0000 UTC m=+1045.644814065" observedRunningTime="2026-01-29 15:45:03.591819479 +0000 UTC m=+1047.264673716" watchObservedRunningTime="2026-01-29 15:45:03.594900155 +0000 UTC m=+1047.267754392" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.597561 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-n4xtj" event={"ID":"7a610d2e-cb71-4995-a0e8-f6dc26f7664a","Type":"ContainerStarted","Data":"1b336e59666efcd1490d4a401733d6fd7317ac50554a224e54ce910bc925d425"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.597621 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-n4xtj" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.599470 5008 generic.go:334] "Generic (PLEG): container finished" podID="6bfb4d07-e2b9-42e2-951c-3d9f2ad23202" containerID="ac0b6463e1c89ffcdf2ab1a2ad453e18c97ff25a3453de99022e4a7402303b41" exitCode=0 Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.599588 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh" event={"ID":"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202","Type":"ContainerDied","Data":"ac0b6463e1c89ffcdf2ab1a2ad453e18c97ff25a3453de99022e4a7402303b41"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.599609 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh" event={"ID":"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202","Type":"ContainerStarted","Data":"4ba45acf3a1ef175f4029e9d7b056c8442e4ecfde30985996aca525c99650ef6"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.601206 5008 generic.go:334] "Generic (PLEG): container finished" podID="014fe771-fe01-4b92-b038-862615b75136" containerID="6146763d50fe2db378760e8a9cd32d988036e3f58c7668e786dd7811a893a9b6" exitCode=0 Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.601247 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z75gs" event={"ID":"014fe771-fe01-4b92-b038-862615b75136","Type":"ContainerDied","Data":"6146763d50fe2db378760e8a9cd32d988036e3f58c7668e786dd7811a893a9b6"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.609491 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-dwhc5" event={"ID":"a2163508-5800-4d97-b8d4-1f3815764822","Type":"ContainerStarted","Data":"62f64581f3e967e653f0f7bf82542da1b26324470d5536dae67a08ae7718103b"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.609963 5008 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-dwhc5" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.615965 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" event={"ID":"9f5d1ef8-a9b5-428a-b441-b7d763dbd102","Type":"ContainerStarted","Data":"f81d007337c0ae0ba3a13af9502663e1c87ee952f3d7b62a069c96af34017843"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.629108 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-qs9wh" event={"ID":"cae67616-1145-4057-b304-08a322e78d9d","Type":"ContainerStarted","Data":"abeb5f460434b370df78fba321d8cfb4051c19265056a363be0c2c42b3403ae4"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.631225 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-qs9wh" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.635994 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-84h7l" podStartSLOduration=2.987048831 podStartE2EDuration="33.635973458s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:32.338258953 +0000 UTC m=+1016.011113190" lastFinishedPulling="2026-01-29 15:45:02.98718358 +0000 UTC m=+1046.660037817" observedRunningTime="2026-01-29 15:45:03.63147382 +0000 UTC m=+1047.304328057" watchObservedRunningTime="2026-01-29 15:45:03.635973458 +0000 UTC m=+1047.308827695" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.651365 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-4zrsr" podStartSLOduration=11.424350437 podStartE2EDuration="33.65134446s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:31.655929824 +0000 UTC m=+1015.328784061" lastFinishedPulling="2026-01-29 15:44:53.882923807 +0000 UTC m=+1037.555778084" observedRunningTime="2026-01-29 15:45:03.649954706 +0000 UTC m=+1047.322808933" watchObservedRunningTime="2026-01-29 15:45:03.65134446 +0000 UTC m=+1047.324198697" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.674693 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qjtzq" event={"ID":"cb2d6253-7fa7-41a9-9d0b-002ef590c4db","Type":"ContainerStarted","Data":"05265b3e4a002912dbd3447c2ee696494182fb85b244603e887ec83f3ef57e47"} Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.675894 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qjtzq" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.684524 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bbsft" podStartSLOduration=4.078459916 podStartE2EDuration="33.684506713s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:32.381504789 +0000 UTC m=+1016.054359026" lastFinishedPulling="2026-01-29 15:45:01.987551586 +0000 UTC m=+1045.660405823" observedRunningTime="2026-01-29 15:45:03.674732306 +0000 UTC m=+1047.347586543" watchObservedRunningTime="2026-01-29 15:45:03.684506713 +0000 UTC 
m=+1047.357360950" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.740001 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zbddd" podStartSLOduration=4.161982008 podStartE2EDuration="33.739988605s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:32.392540936 +0000 UTC m=+1016.065395173" lastFinishedPulling="2026-01-29 15:45:01.970547533 +0000 UTC m=+1045.643401770" observedRunningTime="2026-01-29 15:45:03.738364405 +0000 UTC m=+1047.411218642" watchObservedRunningTime="2026-01-29 15:45:03.739988605 +0000 UTC m=+1047.412842842" Jan 29 15:45:03 crc kubenswrapper[5008]: E0129 15:45:03.763925 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:45:03 crc kubenswrapper[5008]: E0129 15:45:03.764066 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tg272,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-z75gs_openshift-marketplace(014fe771-fe01-4b92-b038-862615b75136): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:45:03 crc kubenswrapper[5008]: E0129 15:45:03.766884 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-z75gs" podUID="014fe771-fe01-4b92-b038-862615b75136" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 
15:45:03.805107 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hh7sg" podStartSLOduration=13.298796169 podStartE2EDuration="33.80509074s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:31.330372368 +0000 UTC m=+1015.003226605" lastFinishedPulling="2026-01-29 15:44:51.836666939 +0000 UTC m=+1035.509521176" observedRunningTime="2026-01-29 15:45:03.772071661 +0000 UTC m=+1047.444925908" watchObservedRunningTime="2026-01-29 15:45:03.80509074 +0000 UTC m=+1047.477944977" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.808424 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-klqvj" podStartSLOduration=4.230673378 podStartE2EDuration="33.80841262s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:32.391576103 +0000 UTC m=+1016.064430340" lastFinishedPulling="2026-01-29 15:45:01.969315305 +0000 UTC m=+1045.642169582" observedRunningTime="2026-01-29 15:45:03.807092488 +0000 UTC m=+1047.479946725" watchObservedRunningTime="2026-01-29 15:45:03.80841262 +0000 UTC m=+1047.481266857" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.857980 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-qs9wh" podStartSLOduration=3.059110983 podStartE2EDuration="33.857967929s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:31.991679157 +0000 UTC m=+1015.664533394" lastFinishedPulling="2026-01-29 15:45:02.790536093 +0000 UTC m=+1046.463390340" observedRunningTime="2026-01-29 15:45:03.857200541 +0000 UTC m=+1047.530054778" watchObservedRunningTime="2026-01-29 15:45:03.857967929 +0000 UTC m=+1047.530822166" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.895482 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" podStartSLOduration=33.895465596 podStartE2EDuration="33.895465596s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:45:03.893578831 +0000 UTC m=+1047.566433068" watchObservedRunningTime="2026-01-29 15:45:03.895465596 +0000 UTC m=+1047.568319853" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.906986 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-n4xtj" podStartSLOduration=11.65763433 podStartE2EDuration="33.906974464s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:31.633485001 +0000 UTC m=+1015.306339238" lastFinishedPulling="2026-01-29 15:44:53.882825105 +0000 UTC m=+1037.555679372" observedRunningTime="2026-01-29 15:45:03.90513837 +0000 UTC m=+1047.578005468" watchObservedRunningTime="2026-01-29 15:45:03.906974464 +0000 UTC m=+1047.579828691" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.920820 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-ncxxj" podStartSLOduration=13.907767701000001 podStartE2EDuration="33.920804999s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" 
firstStartedPulling="2026-01-29 15:44:31.823614151 +0000 UTC m=+1015.496468388" lastFinishedPulling="2026-01-29 15:44:51.836651449 +0000 UTC m=+1035.509505686" observedRunningTime="2026-01-29 15:45:03.91831979 +0000 UTC m=+1047.591174027" watchObservedRunningTime="2026-01-29 15:45:03.920804999 +0000 UTC m=+1047.593659236" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.938353 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-dwhc5" podStartSLOduration=4.334894101 podStartE2EDuration="33.938338934s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:32.387614777 +0000 UTC m=+1016.060469014" lastFinishedPulling="2026-01-29 15:45:01.9910596 +0000 UTC m=+1045.663913847" observedRunningTime="2026-01-29 15:45:03.934351167 +0000 UTC m=+1047.607205404" watchObservedRunningTime="2026-01-29 15:45:03.938338934 +0000 UTC m=+1047.611193171" Jan 29 15:45:03 crc kubenswrapper[5008]: I0129 15:45:03.961654 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qjtzq" podStartSLOduration=12.450714328 podStartE2EDuration="33.961637557s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:32.374253753 +0000 UTC m=+1016.047107990" lastFinishedPulling="2026-01-29 15:44:53.885176982 +0000 UTC m=+1037.558031219" observedRunningTime="2026-01-29 15:45:03.957835196 +0000 UTC m=+1047.630689433" watchObservedRunningTime="2026-01-29 15:45:03.961637557 +0000 UTC m=+1047.634491794" Jan 29 15:45:04 crc kubenswrapper[5008]: E0129 15:45:04.324317 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6kzcj" podUID="c82fc869-759d-4902-9aef-fdd69452b420" Jan 29 15:45:04 crc kubenswrapper[5008]: I0129 15:45:04.681974 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-9sf7f" event={"ID":"b46e3eea-2330-4b3f-b45d-34ae38a0dde9","Type":"ContainerStarted","Data":"e74665f2cdbce441cbd0fa4745148c26ed1081999840132eae1a9cea0c76feb5"} Jan 29 15:45:04 crc kubenswrapper[5008]: E0129 15:45:04.685441 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-z75gs" podUID="014fe771-fe01-4b92-b038-862615b75136" Jan 29 15:45:04 crc kubenswrapper[5008]: I0129 15:45:04.719380 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-9sf7f" podStartSLOduration=2.65305577 podStartE2EDuration="34.719361011s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:31.855345589 +0000 UTC m=+1015.528199826" lastFinishedPulling="2026-01-29 15:45:03.92165083 +0000 UTC m=+1047.594505067" observedRunningTime="2026-01-29 15:45:04.702279126 +0000 UTC m=+1048.375133383" watchObservedRunningTime="2026-01-29 15:45:04.719361011 +0000 UTC m=+1048.392215258" Jan 29 15:45:05 crc kubenswrapper[5008]: I0129 15:45:05.008110 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh" Jan 29 15:45:05 crc kubenswrapper[5008]: I0129 15:45:05.114402 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-config-volume\") pod \"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202\" (UID: \"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202\") " Jan 29 15:45:05 crc kubenswrapper[5008]: I0129 15:45:05.114520 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-secret-volume\") pod \"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202\" (UID: \"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202\") " Jan 29 15:45:05 crc kubenswrapper[5008]: I0129 15:45:05.114575 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5tsl\" (UniqueName: \"kubernetes.io/projected/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-kube-api-access-b5tsl\") pod \"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202\" (UID: \"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202\") " Jan 29 15:45:05 crc kubenswrapper[5008]: I0129 15:45:05.115769 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-config-volume" (OuterVolumeSpecName: "config-volume") pod "6bfb4d07-e2b9-42e2-951c-3d9f2ad23202" (UID: "6bfb4d07-e2b9-42e2-951c-3d9f2ad23202"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:45:05 crc kubenswrapper[5008]: I0129 15:45:05.121158 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6bfb4d07-e2b9-42e2-951c-3d9f2ad23202" (UID: "6bfb4d07-e2b9-42e2-951c-3d9f2ad23202"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:45:05 crc kubenswrapper[5008]: I0129 15:45:05.121102 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-kube-api-access-b5tsl" (OuterVolumeSpecName: "kube-api-access-b5tsl") pod "6bfb4d07-e2b9-42e2-951c-3d9f2ad23202" (UID: "6bfb4d07-e2b9-42e2-951c-3d9f2ad23202"). InnerVolumeSpecName "kube-api-access-b5tsl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:45:05 crc kubenswrapper[5008]: I0129 15:45:05.216404 5008 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 15:45:05 crc kubenswrapper[5008]: I0129 15:45:05.216446 5008 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 15:45:05 crc kubenswrapper[5008]: I0129 15:45:05.216459 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5tsl\" (UniqueName: \"kubernetes.io/projected/6bfb4d07-e2b9-42e2-951c-3d9f2ad23202-kube-api-access-b5tsl\") on node \"crc\" DevicePath \"\"" Jan 29 15:45:05 crc kubenswrapper[5008]: I0129 15:45:05.326805 5008 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 15:45:05 crc kubenswrapper[5008]: I0129 15:45:05.692235 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh" event={"ID":"6bfb4d07-e2b9-42e2-951c-3d9f2ad23202","Type":"ContainerDied","Data":"4ba45acf3a1ef175f4029e9d7b056c8442e4ecfde30985996aca525c99650ef6"} Jan 29 15:45:05 crc kubenswrapper[5008]: I0129 15:45:05.692307 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ba45acf3a1ef175f4029e9d7b056c8442e4ecfde30985996aca525c99650ef6" Jan 29 15:45:05 crc kubenswrapper[5008]: I0129 15:45:05.692317 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-5c6mh" Jan 29 15:45:07 crc kubenswrapper[5008]: I0129 15:45:07.168958 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-77db58b9dd-srsvv" Jan 29 15:45:07 crc kubenswrapper[5008]: I0129 15:45:07.715599 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bjjwz" event={"ID":"d39876a5-4ca3-44e2-a4c5-c6541c2ec812","Type":"ContainerStarted","Data":"3f21253bce924e7eaadfcefeb40aa20f8865fddcdd5547ea99ebd70f67299196"} Jan 29 15:45:07 crc kubenswrapper[5008]: I0129 15:45:07.715954 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bjjwz" Jan 29 15:45:07 crc kubenswrapper[5008]: I0129 15:45:07.716879 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" event={"ID":"4ff89cd9-951e-4907-b60c-a1a1c08007a4","Type":"ContainerStarted","Data":"67374a8df764e4400300a81cd50767b0992ed79dcfc8091f6d4bf5484b09fba2"} Jan 29 15:45:07 crc kubenswrapper[5008]: I0129 15:45:07.717032 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" Jan 29 15:45:07 crc kubenswrapper[5008]: I0129 15:45:07.718286 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" event={"ID":"9f5d1ef8-a9b5-428a-b441-b7d763dbd102","Type":"ContainerStarted","Data":"fbedf6f95722853c707dece2abc3325005e22f2716b0a48ac2ddb7b1c831f4d5"} Jan 29 15:45:07 crc kubenswrapper[5008]: I0129 
15:45:07.718495 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" Jan 29 15:45:07 crc kubenswrapper[5008]: I0129 15:45:07.730352 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bjjwz" podStartSLOduration=3.608112006 podStartE2EDuration="37.730334559s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:32.369837547 +0000 UTC m=+1016.042691784" lastFinishedPulling="2026-01-29 15:45:06.4920601 +0000 UTC m=+1050.164914337" observedRunningTime="2026-01-29 15:45:07.728230639 +0000 UTC m=+1051.401084876" watchObservedRunningTime="2026-01-29 15:45:07.730334559 +0000 UTC m=+1051.403188816" Jan 29 15:45:07 crc kubenswrapper[5008]: I0129 15:45:07.746395 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" podStartSLOduration=33.727232886 podStartE2EDuration="37.746375228s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:45:02.492940952 +0000 UTC m=+1046.165795189" lastFinishedPulling="2026-01-29 15:45:06.512083274 +0000 UTC m=+1050.184937531" observedRunningTime="2026-01-29 15:45:07.743965129 +0000 UTC m=+1051.416819396" watchObservedRunningTime="2026-01-29 15:45:07.746375228 +0000 UTC m=+1051.419229475" Jan 29 15:45:07 crc kubenswrapper[5008]: I0129 15:45:07.785045 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" podStartSLOduration=33.855957071 podStartE2EDuration="37.785027823s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:45:02.560874346 +0000 UTC m=+1046.233728583" lastFinishedPulling="2026-01-29 15:45:06.489945058 +0000 UTC m=+1050.162799335" observedRunningTime="2026-01-29 15:45:07.778276879 +0000 UTC m=+1051.451131126" watchObservedRunningTime="2026-01-29 15:45:07.785027823 +0000 UTC m=+1051.457882080" Jan 29 15:45:09 crc kubenswrapper[5008]: I0129 15:45:09.745148 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-44qcp" event={"ID":"14020423-5911-4b69-8889-b12267c9bbf9","Type":"ContainerStarted","Data":"6b42c189843f865aeb4d6b78a6d289090887d3a0d8a4d7788b1ee3272759fde4"} Jan 29 15:45:09 crc kubenswrapper[5008]: I0129 15:45:09.745805 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-44qcp" Jan 29 15:45:09 crc kubenswrapper[5008]: I0129 15:45:09.764121 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-44qcp" podStartSLOduration=4.600557219 podStartE2EDuration="39.764099916s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:31.865339451 +0000 UTC m=+1015.538193688" lastFinishedPulling="2026-01-29 15:45:07.028882108 +0000 UTC m=+1050.701736385" observedRunningTime="2026-01-29 15:45:09.760237712 +0000 UTC m=+1053.433091939" watchObservedRunningTime="2026-01-29 15:45:09.764099916 +0000 UTC m=+1053.436954213" Jan 29 15:45:10 crc kubenswrapper[5008]: I0129 15:45:10.561652 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-hh7sg" Jan 29 15:45:10 crc kubenswrapper[5008]: I0129 15:45:10.607223 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-n4xtj" Jan 29 15:45:10 crc kubenswrapper[5008]: I0129 15:45:10.611007 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-4zrsr" Jan 29 15:45:10 crc kubenswrapper[5008]: I0129 15:45:10.634899 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-s4fq5" Jan 29 15:45:10 crc kubenswrapper[5008]: I0129 15:45:10.691115 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-q7khh" Jan 29 15:45:10 crc kubenswrapper[5008]: I0129 15:45:10.728082 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-9sf7f" Jan 29 15:45:10 crc kubenswrapper[5008]: I0129 15:45:10.731486 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-9sf7f" Jan 29 15:45:10 crc kubenswrapper[5008]: I0129 15:45:10.751893 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fxz5k" event={"ID":"d4fd527b-7108-4f94-b7a9-bb0b358b8c3c","Type":"ContainerStarted","Data":"66fad5914636c645b6adbeb3857b2a7a42c761abe9ed7666b21b08ae58584f10"} Jan 29 15:45:10 crc kubenswrapper[5008]: I0129 15:45:10.752082 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fxz5k" Jan 29 15:45:10 crc kubenswrapper[5008]: I0129 15:45:10.753065 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qhwnb" event={"ID":"e76346a9-7ba5-4178-82b7-da9f0c337c08","Type":"ContainerStarted","Data":"965fa0b13fd2d11bb62fe5f807f525ab01ed8547b3557573aefd8a284466f1c1"} Jan 29 15:45:10 crc kubenswrapper[5008]: I0129 15:45:10.753520 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qhwnb" Jan 29 15:45:10 crc kubenswrapper[5008]: I0129 15:45:10.758064 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-qs9wh" Jan 29 15:45:10 crc kubenswrapper[5008]: I0129 15:45:10.769861 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fxz5k" podStartSLOduration=2.911689296 podStartE2EDuration="40.769845799s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:32.378076206 +0000 UTC m=+1016.050930443" lastFinishedPulling="2026-01-29 15:45:10.236232679 +0000 UTC m=+1053.909086946" observedRunningTime="2026-01-29 15:45:10.768092197 +0000 UTC m=+1054.440946434" watchObservedRunningTime="2026-01-29 15:45:10.769845799 +0000 UTC m=+1054.442700036" Jan 29 15:45:10 crc kubenswrapper[5008]: I0129 15:45:10.794679 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qhwnb" podStartSLOduration=2.26327123 podStartE2EDuration="40.79466348s" podCreationTimestamp="2026-01-29 15:44:30 +0000 UTC" firstStartedPulling="2026-01-29 15:44:31.872027763 +0000 UTC m=+1015.544882000" lastFinishedPulling="2026-01-29 15:45:10.403420013 +0000 UTC m=+1054.076274250" observedRunningTime="2026-01-29 15:45:10.789155527 +0000 UTC m=+1054.462009764" watchObservedRunningTime="2026-01-29 15:45:10.79466348 +0000 UTC m=+1054.467517717" Jan 29 15:45:10 crc kubenswrapper[5008]: I0129 15:45:10.873714 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-ncxxj" Jan 29 15:45:11 crc kubenswrapper[5008]: I0129 15:45:11.031062 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-bjjwz" Jan 29 15:45:11 crc kubenswrapper[5008]: I0129 15:45:11.050018 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-klqvj" Jan 29 15:45:11 crc kubenswrapper[5008]: I0129 15:45:11.103518 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zbddd" Jan 29 15:45:11 crc kubenswrapper[5008]: I0129 15:45:11.159140 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qjtzq" Jan 29 15:45:11 crc kubenswrapper[5008]: I0129 15:45:11.202304 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-84h7l" Jan 29 15:45:11 crc kubenswrapper[5008]: I0129 15:45:11.227921 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bbsft" Jan 29 15:45:11 crc kubenswrapper[5008]: I0129 15:45:11.230900 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xjf4m" Jan 29 15:45:11 crc kubenswrapper[5008]: I0129 15:45:11.506696 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-dwhc5" Jan 29 15:45:13 crc kubenswrapper[5008]: I0129 15:45:13.990697 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:45:13 crc kubenswrapper[5008]: I0129 15:45:13.990850 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:45:13 crc kubenswrapper[5008]: I0129 15:45:13.990931 5008 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:45:13 crc kubenswrapper[5008]: I0129 15:45:13.992099 5008 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f87de1e980db0bd16d914932ff79d49ee9898f73c25f93235e4e1fda574d4c5a"} pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:45:13 crc kubenswrapper[5008]: I0129 15:45:13.992586 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" containerID="cri-o://f87de1e980db0bd16d914932ff79d49ee9898f73c25f93235e4e1fda574d4c5a" gracePeriod=600 Jan 29 15:45:16 crc kubenswrapper[5008]: E0129 15:45:16.327850 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6kzcj" podUID="c82fc869-759d-4902-9aef-fdd69452b420" Jan 29 15:45:16 crc kubenswrapper[5008]: E0129 15:45:16.327852 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtv85" podUID="1a373ec7-8da3-4b3e-a08a-e5e8b8e5a2d1" Jan 29 15:45:16 crc kubenswrapper[5008]: E0129 15:45:16.328076 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9l2c6" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" Jan 29 15:45:16 crc kubenswrapper[5008]: I0129 15:45:16.415065 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-zvcs5" Jan 29 15:45:16 crc kubenswrapper[5008]: I0129 15:45:16.737877 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv" Jan 29 15:45:18 crc kubenswrapper[5008]: E0129 15:45:18.449209 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:45:18 crc kubenswrapper[5008]: E0129 15:45:18.449380 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tg272,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-z75gs_openshift-marketplace(014fe771-fe01-4b92-b038-862615b75136): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:45:18 crc kubenswrapper[5008]: E0129 15:45:18.451333 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-z75gs" podUID="014fe771-fe01-4b92-b038-862615b75136" Jan 29 15:45:20 crc kubenswrapper[5008]: I0129 15:45:20.978146 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qhwnb" Jan 29 15:45:21 crc kubenswrapper[5008]: I0129 15:45:21.009122 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-44qcp" Jan 29 15:45:21 crc kubenswrapper[5008]: I0129 15:45:21.465228 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-fxz5k" Jan 29 15:45:26 crc kubenswrapper[5008]: I0129 15:45:26.101175 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-daemon-gk9q8_ca0fcb2d-733d-4bde-9bbf-3f7082d0e244/machine-config-daemon/4.log" Jan 29 15:45:26 crc kubenswrapper[5008]: I0129 15:45:26.102713 5008 generic.go:334] "Generic (PLEG): container finished" podID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerID="f87de1e980db0bd16d914932ff79d49ee9898f73c25f93235e4e1fda574d4c5a" exitCode=-1 Jan 29 15:45:26 crc kubenswrapper[5008]: I0129 15:45:26.102759 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" 
event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerDied","Data":"f87de1e980db0bd16d914932ff79d49ee9898f73c25f93235e4e1fda574d4c5a"} Jan 29 15:45:26 crc kubenswrapper[5008]: I0129 15:45:26.102840 5008 scope.go:117] "RemoveContainer" containerID="d89267ade5f0f1bc5747291958183960695e4e4e932d44027e6c4704ebb5c4ef" Jan 29 15:45:27 crc kubenswrapper[5008]: E0129 15:45:27.360199 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9l2c6" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" Jan 29 15:45:28 crc kubenswrapper[5008]: E0129 15:45:28.324989 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6kzcj" podUID="c82fc869-759d-4902-9aef-fdd69452b420" Jan 29 15:45:31 crc kubenswrapper[5008]: E0129 15:45:31.326851 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-z75gs" podUID="014fe771-fe01-4b92-b038-862615b75136" Jan 29 15:45:35 crc kubenswrapper[5008]: I0129 15:45:35.169582 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerStarted","Data":"afcf72806e2f44481eaccbb425ccc0452067f0e28ee8224a454fe6d6fab03a1b"} Jan 29 15:45:35 crc kubenswrapper[5008]: I0129 15:45:35.171370 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtv85" event={"ID":"1a373ec7-8da3-4b3e-a08a-e5e8b8e5a2d1","Type":"ContainerStarted","Data":"dd151ea38c4064e07bdf2b218590a45c407525f6fb598dffc985f5c79d6326a7"} Jan 29 15:45:35 crc kubenswrapper[5008]: I0129 15:45:35.205731 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtv85" podStartSLOduration=2.110222511 podStartE2EDuration="1m4.205706795s" podCreationTimestamp="2026-01-29 15:44:31 +0000 UTC" firstStartedPulling="2026-01-29 15:44:32.384524023 +0000 UTC m=+1016.057378260" lastFinishedPulling="2026-01-29 15:45:34.480008307 +0000 UTC m=+1078.152862544" observedRunningTime="2026-01-29 15:45:35.199970856 +0000 UTC m=+1078.872825103" watchObservedRunningTime="2026-01-29 15:45:35.205706795 +0000 UTC m=+1078.878561042" Jan 29 15:45:39 crc kubenswrapper[5008]: E0129 15:45:39.326203 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9l2c6" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" Jan 29 15:45:44 crc kubenswrapper[5008]: I0129 15:45:44.241344 5008 generic.go:334] "Generic (PLEG): container finished" podID="c82fc869-759d-4902-9aef-fdd69452b420" containerID="252ca65842c9d7357ac65b037452a00da92ce644c45e1b9f0b6e067af34afb31" exitCode=0 Jan 29 15:45:44 crc kubenswrapper[5008]: I0129 
15:45:44.241478 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6kzcj" event={"ID":"c82fc869-759d-4902-9aef-fdd69452b420","Type":"ContainerDied","Data":"252ca65842c9d7357ac65b037452a00da92ce644c45e1b9f0b6e067af34afb31"} Jan 29 15:45:45 crc kubenswrapper[5008]: I0129 15:45:45.250912 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6kzcj" event={"ID":"c82fc869-759d-4902-9aef-fdd69452b420","Type":"ContainerStarted","Data":"aa91505cf8b4d23056bc4bbc41262f55839afe4692887dc71784f0fbc58a28a6"} Jan 29 15:45:45 crc kubenswrapper[5008]: I0129 15:45:45.293724 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6kzcj" podStartSLOduration=2.434106012 podStartE2EDuration="1m39.29370484s" podCreationTimestamp="2026-01-29 15:44:06 +0000 UTC" firstStartedPulling="2026-01-29 15:44:07.877247959 +0000 UTC m=+991.550102196" lastFinishedPulling="2026-01-29 15:45:44.736846787 +0000 UTC m=+1088.409701024" observedRunningTime="2026-01-29 15:45:45.275353776 +0000 UTC m=+1088.948208043" watchObservedRunningTime="2026-01-29 15:45:45.29370484 +0000 UTC m=+1088.966559077" Jan 29 15:45:46 crc kubenswrapper[5008]: I0129 15:45:46.802825 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:45:46 crc kubenswrapper[5008]: I0129 15:45:46.803201 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:45:46 crc kubenswrapper[5008]: I0129 15:45:46.876980 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:45:48 crc kubenswrapper[5008]: I0129 15:45:48.274043 5008 generic.go:334] "Generic (PLEG): container finished" podID="014fe771-fe01-4b92-b038-862615b75136" containerID="b091a2c3cf526d0bdb7bf3376685f7d0e8e07a65ed76f6cc14da757b75460432" exitCode=0 Jan 29 15:45:48 crc kubenswrapper[5008]: I0129 15:45:48.274134 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z75gs" event={"ID":"014fe771-fe01-4b92-b038-862615b75136","Type":"ContainerDied","Data":"b091a2c3cf526d0bdb7bf3376685f7d0e8e07a65ed76f6cc14da757b75460432"} Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.283049 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z75gs" event={"ID":"014fe771-fe01-4b92-b038-862615b75136","Type":"ContainerStarted","Data":"ecc4e5a68e9a1c47e753728740eeba62f98f13393292d44b9163dac6f6b4fb16"} Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.302465 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z75gs" podStartSLOduration=31.160997991 podStartE2EDuration="1m16.30245s" podCreationTimestamp="2026-01-29 15:44:33 +0000 UTC" firstStartedPulling="2026-01-29 15:45:03.602516299 +0000 UTC m=+1047.275370536" lastFinishedPulling="2026-01-29 15:45:48.743968278 +0000 UTC m=+1092.416822545" observedRunningTime="2026-01-29 15:45:49.30163862 +0000 UTC m=+1092.974492867" watchObservedRunningTime="2026-01-29 15:45:49.30245 +0000 UTC m=+1092.975304237" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.497136 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-d4fhx"] Jan 29 15:45:49 crc kubenswrapper[5008]: E0129 15:45:49.497412 
5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bfb4d07-e2b9-42e2-951c-3d9f2ad23202" containerName="collect-profiles" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.497423 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bfb4d07-e2b9-42e2-951c-3d9f2ad23202" containerName="collect-profiles" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.497550 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bfb4d07-e2b9-42e2-951c-3d9f2ad23202" containerName="collect-profiles" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.501243 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-d4fhx" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.503022 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.504442 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-96kv2" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.505533 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.505652 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.514936 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-d4fhx"] Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.585017 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-s5tkh"] Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.586084 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-s5tkh" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.588187 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.599941 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-s5tkh"] Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.616005 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqfht\" (UniqueName: \"kubernetes.io/projected/b128f8df-0b1b-4062-9c3d-fd0f1d2e8078-kube-api-access-sqfht\") pod \"dnsmasq-dns-675f4bcbfc-d4fhx\" (UID: \"b128f8df-0b1b-4062-9c3d-fd0f1d2e8078\") " pod="openstack/dnsmasq-dns-675f4bcbfc-d4fhx" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.616144 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b128f8df-0b1b-4062-9c3d-fd0f1d2e8078-config\") pod \"dnsmasq-dns-675f4bcbfc-d4fhx\" (UID: \"b128f8df-0b1b-4062-9c3d-fd0f1d2e8078\") " pod="openstack/dnsmasq-dns-675f4bcbfc-d4fhx" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.717089 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b128f8df-0b1b-4062-9c3d-fd0f1d2e8078-config\") pod \"dnsmasq-dns-675f4bcbfc-d4fhx\" (UID: \"b128f8df-0b1b-4062-9c3d-fd0f1d2e8078\") " pod="openstack/dnsmasq-dns-675f4bcbfc-d4fhx" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.717347 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3db905f0-53de-4983-b70f-c883bfe123ba-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-s5tkh\" (UID: \"3db905f0-53de-4983-b70f-c883bfe123ba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-s5tkh" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.717507 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqfht\" (UniqueName: \"kubernetes.io/projected/b128f8df-0b1b-4062-9c3d-fd0f1d2e8078-kube-api-access-sqfht\") pod \"dnsmasq-dns-675f4bcbfc-d4fhx\" (UID: \"b128f8df-0b1b-4062-9c3d-fd0f1d2e8078\") " pod="openstack/dnsmasq-dns-675f4bcbfc-d4fhx" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.717543 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhgbd\" (UniqueName: \"kubernetes.io/projected/3db905f0-53de-4983-b70f-c883bfe123ba-kube-api-access-hhgbd\") pod \"dnsmasq-dns-78dd6ddcc-s5tkh\" (UID: \"3db905f0-53de-4983-b70f-c883bfe123ba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-s5tkh" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.717567 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3db905f0-53de-4983-b70f-c883bfe123ba-config\") pod \"dnsmasq-dns-78dd6ddcc-s5tkh\" (UID: \"3db905f0-53de-4983-b70f-c883bfe123ba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-s5tkh" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.717999 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b128f8df-0b1b-4062-9c3d-fd0f1d2e8078-config\") pod \"dnsmasq-dns-675f4bcbfc-d4fhx\" (UID: \"b128f8df-0b1b-4062-9c3d-fd0f1d2e8078\") " pod="openstack/dnsmasq-dns-675f4bcbfc-d4fhx" Jan 29 
15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.737552 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqfht\" (UniqueName: \"kubernetes.io/projected/b128f8df-0b1b-4062-9c3d-fd0f1d2e8078-kube-api-access-sqfht\") pod \"dnsmasq-dns-675f4bcbfc-d4fhx\" (UID: \"b128f8df-0b1b-4062-9c3d-fd0f1d2e8078\") " pod="openstack/dnsmasq-dns-675f4bcbfc-d4fhx" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.818919 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3db905f0-53de-4983-b70f-c883bfe123ba-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-s5tkh\" (UID: \"3db905f0-53de-4983-b70f-c883bfe123ba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-s5tkh" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.819002 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhgbd\" (UniqueName: \"kubernetes.io/projected/3db905f0-53de-4983-b70f-c883bfe123ba-kube-api-access-hhgbd\") pod \"dnsmasq-dns-78dd6ddcc-s5tkh\" (UID: \"3db905f0-53de-4983-b70f-c883bfe123ba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-s5tkh" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.819029 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3db905f0-53de-4983-b70f-c883bfe123ba-config\") pod \"dnsmasq-dns-78dd6ddcc-s5tkh\" (UID: \"3db905f0-53de-4983-b70f-c883bfe123ba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-s5tkh" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.819697 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3db905f0-53de-4983-b70f-c883bfe123ba-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-s5tkh\" (UID: \"3db905f0-53de-4983-b70f-c883bfe123ba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-s5tkh" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.819741 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3db905f0-53de-4983-b70f-c883bfe123ba-config\") pod \"dnsmasq-dns-78dd6ddcc-s5tkh\" (UID: \"3db905f0-53de-4983-b70f-c883bfe123ba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-s5tkh" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.821698 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-d4fhx" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.849280 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhgbd\" (UniqueName: \"kubernetes.io/projected/3db905f0-53de-4983-b70f-c883bfe123ba-kube-api-access-hhgbd\") pod \"dnsmasq-dns-78dd6ddcc-s5tkh\" (UID: \"3db905f0-53de-4983-b70f-c883bfe123ba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-s5tkh" Jan 29 15:45:49 crc kubenswrapper[5008]: I0129 15:45:49.906021 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-s5tkh" Jan 29 15:45:50 crc kubenswrapper[5008]: I0129 15:45:50.262880 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-d4fhx"] Jan 29 15:45:50 crc kubenswrapper[5008]: I0129 15:45:50.289100 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-d4fhx" event={"ID":"b128f8df-0b1b-4062-9c3d-fd0f1d2e8078","Type":"ContainerStarted","Data":"7dcfe1c84af859609b7cd8621d352272c552ebce1b442395a6dd0d1578eb8603"} Jan 29 15:45:50 crc kubenswrapper[5008]: I0129 15:45:50.410759 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-s5tkh"] Jan 29 15:45:50 crc kubenswrapper[5008]: W0129 15:45:50.421435 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3db905f0_53de_4983_b70f_c883bfe123ba.slice/crio-30c8df1cacab6aea0ebe156b76659c8cb48d207b8fd5bb6861527a9757db6348 WatchSource:0}: Error finding container 30c8df1cacab6aea0ebe156b76659c8cb48d207b8fd5bb6861527a9757db6348: Status 404 returned error can't find the container with id 30c8df1cacab6aea0ebe156b76659c8cb48d207b8fd5bb6861527a9757db6348 Jan 29 15:45:51 crc kubenswrapper[5008]: I0129 15:45:51.303563 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-s5tkh" event={"ID":"3db905f0-53de-4983-b70f-c883bfe123ba","Type":"ContainerStarted","Data":"30c8df1cacab6aea0ebe156b76659c8cb48d207b8fd5bb6861527a9757db6348"} Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.354995 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-d4fhx"] Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.384314 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vs5xd"] Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.385339 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.411639 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vs5xd"] Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.558507 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt2jc\" (UniqueName: \"kubernetes.io/projected/eaa396b6-206d-4e0f-8983-ee9ac16c910a-kube-api-access-gt2jc\") pod \"dnsmasq-dns-666b6646f7-vs5xd\" (UID: \"eaa396b6-206d-4e0f-8983-ee9ac16c910a\") " pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.558582 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaa396b6-206d-4e0f-8983-ee9ac16c910a-dns-svc\") pod \"dnsmasq-dns-666b6646f7-vs5xd\" (UID: \"eaa396b6-206d-4e0f-8983-ee9ac16c910a\") " pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.558610 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa396b6-206d-4e0f-8983-ee9ac16c910a-config\") pod \"dnsmasq-dns-666b6646f7-vs5xd\" (UID: \"eaa396b6-206d-4e0f-8983-ee9ac16c910a\") " pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.588990 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-s5tkh"] Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.614883 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7pwkf"] Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.622039 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.649244 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7pwkf"] Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.660932 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt2jc\" (UniqueName: \"kubernetes.io/projected/eaa396b6-206d-4e0f-8983-ee9ac16c910a-kube-api-access-gt2jc\") pod \"dnsmasq-dns-666b6646f7-vs5xd\" (UID: \"eaa396b6-206d-4e0f-8983-ee9ac16c910a\") " pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.660995 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaa396b6-206d-4e0f-8983-ee9ac16c910a-dns-svc\") pod \"dnsmasq-dns-666b6646f7-vs5xd\" (UID: \"eaa396b6-206d-4e0f-8983-ee9ac16c910a\") " pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.661013 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa396b6-206d-4e0f-8983-ee9ac16c910a-config\") pod \"dnsmasq-dns-666b6646f7-vs5xd\" (UID: \"eaa396b6-206d-4e0f-8983-ee9ac16c910a\") " pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.661925 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa396b6-206d-4e0f-8983-ee9ac16c910a-config\") pod \"dnsmasq-dns-666b6646f7-vs5xd\" (UID: \"eaa396b6-206d-4e0f-8983-ee9ac16c910a\") " pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.664640 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaa396b6-206d-4e0f-8983-ee9ac16c910a-dns-svc\") pod \"dnsmasq-dns-666b6646f7-vs5xd\" (UID: \"eaa396b6-206d-4e0f-8983-ee9ac16c910a\") " pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.691710 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt2jc\" (UniqueName: \"kubernetes.io/projected/eaa396b6-206d-4e0f-8983-ee9ac16c910a-kube-api-access-gt2jc\") pod \"dnsmasq-dns-666b6646f7-vs5xd\" (UID: \"eaa396b6-206d-4e0f-8983-ee9ac16c910a\") " pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.724166 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.768301 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d528ee94-b499-4f20-8603-6dcc9e8b0361-config\") pod \"dnsmasq-dns-57d769cc4f-7pwkf\" (UID: \"d528ee94-b499-4f20-8603-6dcc9e8b0361\") " pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.768432 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75fqk\" (UniqueName: \"kubernetes.io/projected/d528ee94-b499-4f20-8603-6dcc9e8b0361-kube-api-access-75fqk\") pod \"dnsmasq-dns-57d769cc4f-7pwkf\" (UID: \"d528ee94-b499-4f20-8603-6dcc9e8b0361\") " pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.768527 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d528ee94-b499-4f20-8603-6dcc9e8b0361-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7pwkf\" (UID: \"d528ee94-b499-4f20-8603-6dcc9e8b0361\") " pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.870476 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d528ee94-b499-4f20-8603-6dcc9e8b0361-config\") pod \"dnsmasq-dns-57d769cc4f-7pwkf\" (UID: \"d528ee94-b499-4f20-8603-6dcc9e8b0361\") " pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.870543 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75fqk\" (UniqueName: \"kubernetes.io/projected/d528ee94-b499-4f20-8603-6dcc9e8b0361-kube-api-access-75fqk\") pod \"dnsmasq-dns-57d769cc4f-7pwkf\" (UID: \"d528ee94-b499-4f20-8603-6dcc9e8b0361\") " pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.870574 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d528ee94-b499-4f20-8603-6dcc9e8b0361-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7pwkf\" (UID: \"d528ee94-b499-4f20-8603-6dcc9e8b0361\") " pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.872805 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d528ee94-b499-4f20-8603-6dcc9e8b0361-config\") pod \"dnsmasq-dns-57d769cc4f-7pwkf\" (UID: \"d528ee94-b499-4f20-8603-6dcc9e8b0361\") " pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.872892 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d528ee94-b499-4f20-8603-6dcc9e8b0361-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7pwkf\" (UID: \"d528ee94-b499-4f20-8603-6dcc9e8b0361\") " pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.911102 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75fqk\" (UniqueName: \"kubernetes.io/projected/d528ee94-b499-4f20-8603-6dcc9e8b0361-kube-api-access-75fqk\") pod \"dnsmasq-dns-57d769cc4f-7pwkf\" (UID: \"d528ee94-b499-4f20-8603-6dcc9e8b0361\") " 
pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" Jan 29 15:45:52 crc kubenswrapper[5008]: I0129 15:45:52.971855 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.215119 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vs5xd"] Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.318503 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" event={"ID":"eaa396b6-206d-4e0f-8983-ee9ac16c910a","Type":"ContainerStarted","Data":"309fd497280f26c9fefa297dd5016a654c256866d10ab9c20a829153df0b8be3"} Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.398396 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7pwkf"] Jan 29 15:45:53 crc kubenswrapper[5008]: W0129 15:45:53.403436 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd528ee94_b499_4f20_8603_6dcc9e8b0361.slice/crio-7e40b85878fc9eb94adb0dc672f4b4d3fd0475b78dd43bc83dd4dd513c313465 WatchSource:0}: Error finding container 7e40b85878fc9eb94adb0dc672f4b4d3fd0475b78dd43bc83dd4dd513c313465: Status 404 returned error can't find the container with id 7e40b85878fc9eb94adb0dc672f4b4d3fd0475b78dd43bc83dd4dd513c313465 Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.480244 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.481563 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.484442 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.488771 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.490422 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.490429 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.490603 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-7kjkn" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.492650 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.493088 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.493611 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.579401 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8c8683a3-18f6-4242-9991-b542aed9143b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.579450 5008 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c8683a3-18f6-4242-9991-b542aed9143b-config-data\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.579492 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.579520 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8c8683a3-18f6-4242-9991-b542aed9143b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.579536 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8c8683a3-18f6-4242-9991-b542aed9143b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.579562 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8c8683a3-18f6-4242-9991-b542aed9143b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.579578 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8c8683a3-18f6-4242-9991-b542aed9143b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.579595 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8s6q\" (UniqueName: \"kubernetes.io/projected/8c8683a3-18f6-4242-9991-b542aed9143b-kube-api-access-w8s6q\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.579618 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8c8683a3-18f6-4242-9991-b542aed9143b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.579639 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8c8683a3-18f6-4242-9991-b542aed9143b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.579657 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" 
(UniqueName: \"kubernetes.io/downward-api/8c8683a3-18f6-4242-9991-b542aed9143b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.680773 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8c8683a3-18f6-4242-9991-b542aed9143b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.680835 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8c8683a3-18f6-4242-9991-b542aed9143b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.680866 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8c8683a3-18f6-4242-9991-b542aed9143b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.680885 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8c8683a3-18f6-4242-9991-b542aed9143b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.680907 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8s6q\" (UniqueName: \"kubernetes.io/projected/8c8683a3-18f6-4242-9991-b542aed9143b-kube-api-access-w8s6q\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.680935 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8c8683a3-18f6-4242-9991-b542aed9143b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.680957 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8c8683a3-18f6-4242-9991-b542aed9143b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.680976 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8c8683a3-18f6-4242-9991-b542aed9143b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.680998 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8c8683a3-18f6-4242-9991-b542aed9143b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc 
kubenswrapper[5008]: I0129 15:45:53.681016 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c8683a3-18f6-4242-9991-b542aed9143b-config-data\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.681052 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.681390 5008 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.682691 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8c8683a3-18f6-4242-9991-b542aed9143b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.683311 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8c8683a3-18f6-4242-9991-b542aed9143b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.683635 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8c8683a3-18f6-4242-9991-b542aed9143b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.684012 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c8683a3-18f6-4242-9991-b542aed9143b-config-data\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.685812 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8c8683a3-18f6-4242-9991-b542aed9143b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.686943 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8c8683a3-18f6-4242-9991-b542aed9143b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.687363 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8c8683a3-18f6-4242-9991-b542aed9143b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: 
\"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.687475 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8c8683a3-18f6-4242-9991-b542aed9143b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.687678 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8c8683a3-18f6-4242-9991-b542aed9143b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.696577 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8s6q\" (UniqueName: \"kubernetes.io/projected/8c8683a3-18f6-4242-9991-b542aed9143b-kube-api-access-w8s6q\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.739289 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"8c8683a3-18f6-4242-9991-b542aed9143b\") " pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.745342 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.746948 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.749471 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.749665 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.749936 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.749975 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-tfhm4" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.750148 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.750365 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.753196 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.756137 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.811173 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.835736 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.835805 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.884045 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.886114 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhdc9\" (UniqueName: \"kubernetes.io/projected/4dcd0990-beb1-445a-b387-b2b78c1a39d2-kube-api-access-vhdc9\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.886196 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4dcd0990-beb1-445a-b387-b2b78c1a39d2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.886271 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4dcd0990-beb1-445a-b387-b2b78c1a39d2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.886893 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4dcd0990-beb1-445a-b387-b2b78c1a39d2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.886935 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4dcd0990-beb1-445a-b387-b2b78c1a39d2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.886968 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4dcd0990-beb1-445a-b387-b2b78c1a39d2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.886991 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4dcd0990-beb1-445a-b387-b2b78c1a39d2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.887021 5008 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4dcd0990-beb1-445a-b387-b2b78c1a39d2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.887065 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4dcd0990-beb1-445a-b387-b2b78c1a39d2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.887106 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4dcd0990-beb1-445a-b387-b2b78c1a39d2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.887147 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.988702 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.988797 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhdc9\" (UniqueName: \"kubernetes.io/projected/4dcd0990-beb1-445a-b387-b2b78c1a39d2-kube-api-access-vhdc9\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.988820 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4dcd0990-beb1-445a-b387-b2b78c1a39d2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.988842 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4dcd0990-beb1-445a-b387-b2b78c1a39d2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.988871 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4dcd0990-beb1-445a-b387-b2b78c1a39d2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.988902 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/4dcd0990-beb1-445a-b387-b2b78c1a39d2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.988922 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4dcd0990-beb1-445a-b387-b2b78c1a39d2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.988961 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4dcd0990-beb1-445a-b387-b2b78c1a39d2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.988981 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4dcd0990-beb1-445a-b387-b2b78c1a39d2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.989013 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4dcd0990-beb1-445a-b387-b2b78c1a39d2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.989035 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4dcd0990-beb1-445a-b387-b2b78c1a39d2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.989374 5008 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.989423 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4dcd0990-beb1-445a-b387-b2b78c1a39d2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.989928 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4dcd0990-beb1-445a-b387-b2b78c1a39d2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.990487 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4dcd0990-beb1-445a-b387-b2b78c1a39d2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.990493 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4dcd0990-beb1-445a-b387-b2b78c1a39d2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.990529 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4dcd0990-beb1-445a-b387-b2b78c1a39d2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.995030 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4dcd0990-beb1-445a-b387-b2b78c1a39d2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:53 crc kubenswrapper[5008]: I0129 15:45:53.997640 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4dcd0990-beb1-445a-b387-b2b78c1a39d2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:54 crc kubenswrapper[5008]: I0129 15:45:54.003923 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4dcd0990-beb1-445a-b387-b2b78c1a39d2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:54 crc kubenswrapper[5008]: I0129 15:45:54.007695 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4dcd0990-beb1-445a-b387-b2b78c1a39d2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:54 crc kubenswrapper[5008]: I0129 15:45:54.012019 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhdc9\" (UniqueName: \"kubernetes.io/projected/4dcd0990-beb1-445a-b387-b2b78c1a39d2-kube-api-access-vhdc9\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:54 crc kubenswrapper[5008]: I0129 15:45:54.024460 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4dcd0990-beb1-445a-b387-b2b78c1a39d2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:54 crc kubenswrapper[5008]: I0129 15:45:54.083085 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:45:54 crc kubenswrapper[5008]: I0129 15:45:54.304458 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 15:45:54 crc kubenswrapper[5008]: I0129 15:45:54.328558 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" event={"ID":"d528ee94-b499-4f20-8603-6dcc9e8b0361","Type":"ContainerStarted","Data":"7e40b85878fc9eb94adb0dc672f4b4d3fd0475b78dd43bc83dd4dd513c313465"} Jan 29 15:45:54 crc kubenswrapper[5008]: I0129 15:45:54.386498 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:45:54 crc kubenswrapper[5008]: I0129 15:45:54.429677 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z75gs"] Jan 29 15:45:54 crc kubenswrapper[5008]: I0129 15:45:54.517161 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.042042 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.043624 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.089459 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.089974 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.090339 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-gx87v" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.090645 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.092122 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.104376 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.114899 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a2958b99-a5fe-447a-93cc-64bade998854-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.114952 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2958b99-a5fe-447a-93cc-64bade998854-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.114983 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2958b99-a5fe-447a-93cc-64bade998854-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " 
pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.115060 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a2958b99-a5fe-447a-93cc-64bade998854-kolla-config\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.115120 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.122298 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2958b99-a5fe-447a-93cc-64bade998854-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.122458 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a2958b99-a5fe-447a-93cc-64bade998854-config-data-default\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.122553 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmr5g\" (UniqueName: \"kubernetes.io/projected/a2958b99-a5fe-447a-93cc-64bade998854-kube-api-access-xmr5g\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.223875 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.223970 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2958b99-a5fe-447a-93cc-64bade998854-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.223999 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a2958b99-a5fe-447a-93cc-64bade998854-config-data-default\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.224036 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmr5g\" (UniqueName: \"kubernetes.io/projected/a2958b99-a5fe-447a-93cc-64bade998854-kube-api-access-xmr5g\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.224064 5008 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a2958b99-a5fe-447a-93cc-64bade998854-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.224093 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2958b99-a5fe-447a-93cc-64bade998854-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.224125 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2958b99-a5fe-447a-93cc-64bade998854-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.224160 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a2958b99-a5fe-447a-93cc-64bade998854-kolla-config\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.224201 5008 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.224664 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a2958b99-a5fe-447a-93cc-64bade998854-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.225094 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a2958b99-a5fe-447a-93cc-64bade998854-kolla-config\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.225186 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a2958b99-a5fe-447a-93cc-64bade998854-config-data-default\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.225681 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2958b99-a5fe-447a-93cc-64bade998854-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.233267 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2958b99-a5fe-447a-93cc-64bade998854-galera-tls-certs\") pod \"openstack-galera-0\" (UID: 
\"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.233300 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2958b99-a5fe-447a-93cc-64bade998854-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.241084 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmr5g\" (UniqueName: \"kubernetes.io/projected/a2958b99-a5fe-447a-93cc-64bade998854-kube-api-access-xmr5g\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.247296 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"a2958b99-a5fe-447a-93cc-64bade998854\") " pod="openstack/openstack-galera-0" Jan 29 15:45:55 crc kubenswrapper[5008]: I0129 15:45:55.429610 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.348765 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z75gs" podUID="014fe771-fe01-4b92-b038-862615b75136" containerName="registry-server" containerID="cri-o://ecc4e5a68e9a1c47e753728740eeba62f98f13393292d44b9163dac6f6b4fb16" gracePeriod=2 Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.420921 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.422400 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.424715 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.426221 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-zdf89" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.427156 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.427561 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.432971 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.445903 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2c8d6871-1129-4597-8a1e-94006a17448a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.445956 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.445979 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c8d6871-1129-4597-8a1e-94006a17448a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.446004 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2c8d6871-1129-4597-8a1e-94006a17448a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.446029 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25mkv\" (UniqueName: \"kubernetes.io/projected/2c8d6871-1129-4597-8a1e-94006a17448a-kube-api-access-25mkv\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.446055 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2c8d6871-1129-4597-8a1e-94006a17448a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.446082 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2c8d6871-1129-4597-8a1e-94006a17448a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.446118 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c8d6871-1129-4597-8a1e-94006a17448a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.548730 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2c8d6871-1129-4597-8a1e-94006a17448a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.548809 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c8d6871-1129-4597-8a1e-94006a17448a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.548855 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c8d6871-1129-4597-8a1e-94006a17448a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.548951 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2c8d6871-1129-4597-8a1e-94006a17448a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.548980 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.549004 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c8d6871-1129-4597-8a1e-94006a17448a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.549032 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2c8d6871-1129-4597-8a1e-94006a17448a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.549057 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25mkv\" (UniqueName: 
\"kubernetes.io/projected/2c8d6871-1129-4597-8a1e-94006a17448a-kube-api-access-25mkv\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.549365 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2c8d6871-1129-4597-8a1e-94006a17448a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.550323 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2c8d6871-1129-4597-8a1e-94006a17448a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.550411 5008 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.550476 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2c8d6871-1129-4597-8a1e-94006a17448a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.551350 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c8d6871-1129-4597-8a1e-94006a17448a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.553624 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c8d6871-1129-4597-8a1e-94006a17448a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.564636 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c8d6871-1129-4597-8a1e-94006a17448a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.574677 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25mkv\" (UniqueName: \"kubernetes.io/projected/2c8d6871-1129-4597-8a1e-94006a17448a-kube-api-access-25mkv\") pod \"openstack-cell1-galera-0\" (UID: \"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.601135 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: 
\"2c8d6871-1129-4597-8a1e-94006a17448a\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.747612 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.769067 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.770259 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.772828 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-2pxmp" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.773721 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.774982 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.784082 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.861575 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37ef43d-23ae-4a9c-af60-e616882400c3-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b37ef43d-23ae-4a9c-af60-e616882400c3\") " pod="openstack/memcached-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.861628 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b37ef43d-23ae-4a9c-af60-e616882400c3-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b37ef43d-23ae-4a9c-af60-e616882400c3\") " pod="openstack/memcached-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.861681 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b37ef43d-23ae-4a9c-af60-e616882400c3-config-data\") pod \"memcached-0\" (UID: \"b37ef43d-23ae-4a9c-af60-e616882400c3\") " pod="openstack/memcached-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.861736 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b37ef43d-23ae-4a9c-af60-e616882400c3-kolla-config\") pod \"memcached-0\" (UID: \"b37ef43d-23ae-4a9c-af60-e616882400c3\") " pod="openstack/memcached-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.861756 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvf98\" (UniqueName: \"kubernetes.io/projected/b37ef43d-23ae-4a9c-af60-e616882400c3-kube-api-access-fvf98\") pod \"memcached-0\" (UID: \"b37ef43d-23ae-4a9c-af60-e616882400c3\") " pod="openstack/memcached-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.866836 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.911188 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6kzcj"] Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.965601 5008 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b37ef43d-23ae-4a9c-af60-e616882400c3-kolla-config\") pod \"memcached-0\" (UID: \"b37ef43d-23ae-4a9c-af60-e616882400c3\") " pod="openstack/memcached-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.965650 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvf98\" (UniqueName: \"kubernetes.io/projected/b37ef43d-23ae-4a9c-af60-e616882400c3-kube-api-access-fvf98\") pod \"memcached-0\" (UID: \"b37ef43d-23ae-4a9c-af60-e616882400c3\") " pod="openstack/memcached-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.965671 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37ef43d-23ae-4a9c-af60-e616882400c3-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b37ef43d-23ae-4a9c-af60-e616882400c3\") " pod="openstack/memcached-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.965700 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b37ef43d-23ae-4a9c-af60-e616882400c3-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b37ef43d-23ae-4a9c-af60-e616882400c3\") " pod="openstack/memcached-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.965760 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b37ef43d-23ae-4a9c-af60-e616882400c3-config-data\") pod \"memcached-0\" (UID: \"b37ef43d-23ae-4a9c-af60-e616882400c3\") " pod="openstack/memcached-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.966806 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b37ef43d-23ae-4a9c-af60-e616882400c3-config-data\") pod \"memcached-0\" (UID: \"b37ef43d-23ae-4a9c-af60-e616882400c3\") " pod="openstack/memcached-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.966905 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b37ef43d-23ae-4a9c-af60-e616882400c3-kolla-config\") pod \"memcached-0\" (UID: \"b37ef43d-23ae-4a9c-af60-e616882400c3\") " pod="openstack/memcached-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.970428 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b37ef43d-23ae-4a9c-af60-e616882400c3-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b37ef43d-23ae-4a9c-af60-e616882400c3\") " pod="openstack/memcached-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.970713 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37ef43d-23ae-4a9c-af60-e616882400c3-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b37ef43d-23ae-4a9c-af60-e616882400c3\") " pod="openstack/memcached-0" Jan 29 15:45:56 crc kubenswrapper[5008]: I0129 15:45:56.985527 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvf98\" (UniqueName: \"kubernetes.io/projected/b37ef43d-23ae-4a9c-af60-e616882400c3-kube-api-access-fvf98\") pod \"memcached-0\" (UID: \"b37ef43d-23ae-4a9c-af60-e616882400c3\") " pod="openstack/memcached-0" Jan 29 15:45:57 crc kubenswrapper[5008]: I0129 15:45:57.091459 5008 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 29 15:45:57 crc kubenswrapper[5008]: I0129 15:45:57.363700 5008 generic.go:334] "Generic (PLEG): container finished" podID="014fe771-fe01-4b92-b038-862615b75136" containerID="ecc4e5a68e9a1c47e753728740eeba62f98f13393292d44b9163dac6f6b4fb16" exitCode=0 Jan 29 15:45:57 crc kubenswrapper[5008]: I0129 15:45:57.363859 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z75gs" event={"ID":"014fe771-fe01-4b92-b038-862615b75136","Type":"ContainerDied","Data":"ecc4e5a68e9a1c47e753728740eeba62f98f13393292d44b9163dac6f6b4fb16"} Jan 29 15:45:57 crc kubenswrapper[5008]: I0129 15:45:57.364275 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6kzcj" podUID="c82fc869-759d-4902-9aef-fdd69452b420" containerName="registry-server" containerID="cri-o://aa91505cf8b4d23056bc4bbc41262f55839afe4692887dc71784f0fbc58a28a6" gracePeriod=2 Jan 29 15:45:58 crc kubenswrapper[5008]: I0129 15:45:58.371592 5008 generic.go:334] "Generic (PLEG): container finished" podID="c82fc869-759d-4902-9aef-fdd69452b420" containerID="aa91505cf8b4d23056bc4bbc41262f55839afe4692887dc71784f0fbc58a28a6" exitCode=0 Jan 29 15:45:58 crc kubenswrapper[5008]: I0129 15:45:58.371636 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6kzcj" event={"ID":"c82fc869-759d-4902-9aef-fdd69452b420","Type":"ContainerDied","Data":"aa91505cf8b4d23056bc4bbc41262f55839afe4692887dc71784f0fbc58a28a6"} Jan 29 15:45:58 crc kubenswrapper[5008]: W0129 15:45:58.579166 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4dcd0990_beb1_445a_b387_b2b78c1a39d2.slice/crio-d56794076480b52f81cf8a5c95101559ed249b3bb7ac736f6b6e673f01eb9a6f WatchSource:0}: Error finding container d56794076480b52f81cf8a5c95101559ed249b3bb7ac736f6b6e673f01eb9a6f: Status 404 returned error can't find the container with id d56794076480b52f81cf8a5c95101559ed249b3bb7ac736f6b6e673f01eb9a6f Jan 29 15:45:58 crc kubenswrapper[5008]: I0129 15:45:58.644161 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 15:45:58 crc kubenswrapper[5008]: I0129 15:45:58.645303 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 15:45:58 crc kubenswrapper[5008]: I0129 15:45:58.654791 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-4fqm2" Jan 29 15:45:58 crc kubenswrapper[5008]: I0129 15:45:58.658091 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 15:45:58 crc kubenswrapper[5008]: I0129 15:45:58.711544 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzp55\" (UniqueName: \"kubernetes.io/projected/2691fca5-fe1e-4796-bf43-7135e9d5a198-kube-api-access-hzp55\") pod \"kube-state-metrics-0\" (UID: \"2691fca5-fe1e-4796-bf43-7135e9d5a198\") " pod="openstack/kube-state-metrics-0" Jan 29 15:45:58 crc kubenswrapper[5008]: I0129 15:45:58.813120 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzp55\" (UniqueName: \"kubernetes.io/projected/2691fca5-fe1e-4796-bf43-7135e9d5a198-kube-api-access-hzp55\") pod \"kube-state-metrics-0\" (UID: \"2691fca5-fe1e-4796-bf43-7135e9d5a198\") " pod="openstack/kube-state-metrics-0" Jan 29 15:45:58 crc kubenswrapper[5008]: I0129 15:45:58.832203 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzp55\" (UniqueName: \"kubernetes.io/projected/2691fca5-fe1e-4796-bf43-7135e9d5a198-kube-api-access-hzp55\") pod \"kube-state-metrics-0\" (UID: \"2691fca5-fe1e-4796-bf43-7135e9d5a198\") " pod="openstack/kube-state-metrics-0" Jan 29 15:45:58 crc kubenswrapper[5008]: I0129 15:45:58.996094 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 15:45:59 crc kubenswrapper[5008]: I0129 15:45:59.379312 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4dcd0990-beb1-445a-b387-b2b78c1a39d2","Type":"ContainerStarted","Data":"d56794076480b52f81cf8a5c95101559ed249b3bb7ac736f6b6e673f01eb9a6f"} Jan 29 15:45:59 crc kubenswrapper[5008]: W0129 15:45:59.394074 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c8683a3_18f6_4242_9991_b542aed9143b.slice/crio-2811d2fc177f55081ce3ed3924ed922d08db70a9a6e876627e25d9035bac49e2 WatchSource:0}: Error finding container 2811d2fc177f55081ce3ed3924ed922d08db70a9a6e876627e25d9035bac49e2: Status 404 returned error can't find the container with id 2811d2fc177f55081ce3ed3924ed922d08db70a9a6e876627e25d9035bac49e2 Jan 29 15:46:00 crc kubenswrapper[5008]: I0129 15:46:00.386479 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8c8683a3-18f6-4242-9991-b542aed9143b","Type":"ContainerStarted","Data":"2811d2fc177f55081ce3ed3924ed922d08db70a9a6e876627e25d9035bac49e2"} Jan 29 15:46:02 crc kubenswrapper[5008]: I0129 15:46:02.992591 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-bw9wr"] Jan 29 15:46:02 crc kubenswrapper[5008]: I0129 15:46:02.994216 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.000576 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.000885 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-4kjfp" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.000974 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.019774 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bw9wr"] Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.087075 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpldm\" (UniqueName: \"kubernetes.io/projected/0dd702c8-269b-4fb6-a3a7-03adf93d916a-kube-api-access-lpldm\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.087407 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0dd702c8-269b-4fb6-a3a7-03adf93d916a-var-run-ovn\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.087443 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0dd702c8-269b-4fb6-a3a7-03adf93d916a-scripts\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.087506 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0dd702c8-269b-4fb6-a3a7-03adf93d916a-var-log-ovn\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.087525 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dd702c8-269b-4fb6-a3a7-03adf93d916a-ovn-controller-tls-certs\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.087580 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0dd702c8-269b-4fb6-a3a7-03adf93d916a-var-run\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.087601 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd702c8-269b-4fb6-a3a7-03adf93d916a-combined-ca-bundle\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.098700 5008 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-k5zwb"] Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.100249 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.121441 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-k5zwb"] Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.188593 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb07a603-1696-4378-8d99-382d5bc152da-scripts\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.188706 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0dd702c8-269b-4fb6-a3a7-03adf93d916a-var-log-ovn\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.188727 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dd702c8-269b-4fb6-a3a7-03adf93d916a-ovn-controller-tls-certs\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.188759 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/fb07a603-1696-4378-8d99-382d5bc152da-var-log\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.189442 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0dd702c8-269b-4fb6-a3a7-03adf93d916a-var-log-ovn\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.189714 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0dd702c8-269b-4fb6-a3a7-03adf93d916a-var-run\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.189746 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd702c8-269b-4fb6-a3a7-03adf93d916a-combined-ca-bundle\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.189767 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fb07a603-1696-4378-8d99-382d5bc152da-var-run\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.189805 5008 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/fb07a603-1696-4378-8d99-382d5bc152da-var-lib\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.189831 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpldm\" (UniqueName: \"kubernetes.io/projected/0dd702c8-269b-4fb6-a3a7-03adf93d916a-kube-api-access-lpldm\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.189957 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/fb07a603-1696-4378-8d99-382d5bc152da-etc-ovs\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.189978 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8pfz\" (UniqueName: \"kubernetes.io/projected/fb07a603-1696-4378-8d99-382d5bc152da-kube-api-access-c8pfz\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.190043 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0dd702c8-269b-4fb6-a3a7-03adf93d916a-var-run-ovn\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.190069 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0dd702c8-269b-4fb6-a3a7-03adf93d916a-scripts\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.190228 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0dd702c8-269b-4fb6-a3a7-03adf93d916a-var-run-ovn\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.190418 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0dd702c8-269b-4fb6-a3a7-03adf93d916a-var-run\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.196276 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0dd702c8-269b-4fb6-a3a7-03adf93d916a-scripts\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.199558 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dd702c8-269b-4fb6-a3a7-03adf93d916a-ovn-controller-tls-certs\") pod 
\"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.211463 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd702c8-269b-4fb6-a3a7-03adf93d916a-combined-ca-bundle\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.217859 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpldm\" (UniqueName: \"kubernetes.io/projected/0dd702c8-269b-4fb6-a3a7-03adf93d916a-kube-api-access-lpldm\") pod \"ovn-controller-bw9wr\" (UID: \"0dd702c8-269b-4fb6-a3a7-03adf93d916a\") " pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.292553 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/fb07a603-1696-4378-8d99-382d5bc152da-etc-ovs\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.292592 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8pfz\" (UniqueName: \"kubernetes.io/projected/fb07a603-1696-4378-8d99-382d5bc152da-kube-api-access-c8pfz\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.292631 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb07a603-1696-4378-8d99-382d5bc152da-scripts\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.292662 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/fb07a603-1696-4378-8d99-382d5bc152da-var-log\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.292679 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fb07a603-1696-4378-8d99-382d5bc152da-var-run\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.292699 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/fb07a603-1696-4378-8d99-382d5bc152da-var-lib\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.293186 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/fb07a603-1696-4378-8d99-382d5bc152da-var-lib\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.293184 5008 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/fb07a603-1696-4378-8d99-382d5bc152da-etc-ovs\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.293241 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fb07a603-1696-4378-8d99-382d5bc152da-var-run\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.293245 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/fb07a603-1696-4378-8d99-382d5bc152da-var-log\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.294519 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb07a603-1696-4378-8d99-382d5bc152da-scripts\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.311763 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8pfz\" (UniqueName: \"kubernetes.io/projected/fb07a603-1696-4378-8d99-382d5bc152da-kube-api-access-c8pfz\") pod \"ovn-controller-ovs-k5zwb\" (UID: \"fb07a603-1696-4378-8d99-382d5bc152da\") " pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.319494 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.417173 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.799335 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.807117 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.811288 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.812144 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.812363 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.812591 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-52kcd" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.816995 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 29 15:46:03 crc kubenswrapper[5008]: E0129 15:46:03.836916 5008 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ecc4e5a68e9a1c47e753728740eeba62f98f13393292d44b9163dac6f6b4fb16 is running failed: container process not found" containerID="ecc4e5a68e9a1c47e753728740eeba62f98f13393292d44b9163dac6f6b4fb16" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 15:46:03 crc kubenswrapper[5008]: E0129 15:46:03.838281 5008 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ecc4e5a68e9a1c47e753728740eeba62f98f13393292d44b9163dac6f6b4fb16 is running failed: container process not found" containerID="ecc4e5a68e9a1c47e753728740eeba62f98f13393292d44b9163dac6f6b4fb16" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.839080 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 15:46:03 crc kubenswrapper[5008]: E0129 15:46:03.839177 5008 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ecc4e5a68e9a1c47e753728740eeba62f98f13393292d44b9163dac6f6b4fb16 is running failed: container process not found" containerID="ecc4e5a68e9a1c47e753728740eeba62f98f13393292d44b9163dac6f6b4fb16" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 15:46:03 crc kubenswrapper[5008]: E0129 15:46:03.839241 5008 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ecc4e5a68e9a1c47e753728740eeba62f98f13393292d44b9163dac6f6b4fb16 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-z75gs" podUID="014fe771-fe01-4b92-b038-862615b75136" containerName="registry-server" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.904458 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rrwg\" (UniqueName: \"kubernetes.io/projected/4d502938-9e22-4a6c-951e-b476cb87ee8f-kube-api-access-8rrwg\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.905261 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4d502938-9e22-4a6c-951e-b476cb87ee8f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.905354 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.905417 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d502938-9e22-4a6c-951e-b476cb87ee8f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.905496 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4d502938-9e22-4a6c-951e-b476cb87ee8f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.905576 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d502938-9e22-4a6c-951e-b476cb87ee8f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.905671 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d502938-9e22-4a6c-951e-b476cb87ee8f-config\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:03 crc kubenswrapper[5008]: I0129 15:46:03.905724 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d502938-9e22-4a6c-951e-b476cb87ee8f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.006655 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d502938-9e22-4a6c-951e-b476cb87ee8f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.006709 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.006763 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d502938-9e22-4a6c-951e-b476cb87ee8f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" 
Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.006820 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4d502938-9e22-4a6c-951e-b476cb87ee8f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.006846 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d502938-9e22-4a6c-951e-b476cb87ee8f-config\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.006861 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d502938-9e22-4a6c-951e-b476cb87ee8f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.006889 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d502938-9e22-4a6c-951e-b476cb87ee8f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.006926 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rrwg\" (UniqueName: \"kubernetes.io/projected/4d502938-9e22-4a6c-951e-b476cb87ee8f-kube-api-access-8rrwg\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.007150 5008 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.007433 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4d502938-9e22-4a6c-951e-b476cb87ee8f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.008012 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d502938-9e22-4a6c-951e-b476cb87ee8f-config\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.008862 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d502938-9e22-4a6c-951e-b476cb87ee8f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.012380 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d502938-9e22-4a6c-951e-b476cb87ee8f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.013838 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d502938-9e22-4a6c-951e-b476cb87ee8f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.017336 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d502938-9e22-4a6c-951e-b476cb87ee8f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.031803 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rrwg\" (UniqueName: \"kubernetes.io/projected/4d502938-9e22-4a6c-951e-b476cb87ee8f-kube-api-access-8rrwg\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.036944 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4d502938-9e22-4a6c-951e-b476cb87ee8f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.133250 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.648904 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.719199 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/014fe771-fe01-4b92-b038-862615b75136-catalog-content\") pod \"014fe771-fe01-4b92-b038-862615b75136\" (UID: \"014fe771-fe01-4b92-b038-862615b75136\") " Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.719281 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg272\" (UniqueName: \"kubernetes.io/projected/014fe771-fe01-4b92-b038-862615b75136-kube-api-access-tg272\") pod \"014fe771-fe01-4b92-b038-862615b75136\" (UID: \"014fe771-fe01-4b92-b038-862615b75136\") " Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.719312 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/014fe771-fe01-4b92-b038-862615b75136-utilities\") pod \"014fe771-fe01-4b92-b038-862615b75136\" (UID: \"014fe771-fe01-4b92-b038-862615b75136\") " Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.720968 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/014fe771-fe01-4b92-b038-862615b75136-utilities" (OuterVolumeSpecName: "utilities") pod "014fe771-fe01-4b92-b038-862615b75136" (UID: "014fe771-fe01-4b92-b038-862615b75136"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.753291 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/014fe771-fe01-4b92-b038-862615b75136-kube-api-access-tg272" (OuterVolumeSpecName: "kube-api-access-tg272") pod "014fe771-fe01-4b92-b038-862615b75136" (UID: "014fe771-fe01-4b92-b038-862615b75136"). InnerVolumeSpecName "kube-api-access-tg272". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.766295 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/014fe771-fe01-4b92-b038-862615b75136-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "014fe771-fe01-4b92-b038-862615b75136" (UID: "014fe771-fe01-4b92-b038-862615b75136"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.822101 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg272\" (UniqueName: \"kubernetes.io/projected/014fe771-fe01-4b92-b038-862615b75136-kube-api-access-tg272\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.822137 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/014fe771-fe01-4b92-b038-862615b75136-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:04 crc kubenswrapper[5008]: I0129 15:46:04.822148 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/014fe771-fe01-4b92-b038-862615b75136-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.433056 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z75gs" event={"ID":"014fe771-fe01-4b92-b038-862615b75136","Type":"ContainerDied","Data":"3d4dceb557efb379fc43836d7c0b6854e7a45385d099f1155ac83813cd0b127b"} Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.433105 5008 scope.go:117] "RemoveContainer" containerID="ecc4e5a68e9a1c47e753728740eeba62f98f13393292d44b9163dac6f6b4fb16" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.433122 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z75gs" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.453900 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z75gs"] Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.462169 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z75gs"] Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.469543 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 15:46:05 crc kubenswrapper[5008]: E0129 15:46:05.471612 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="014fe771-fe01-4b92-b038-862615b75136" containerName="extract-utilities" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.471744 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="014fe771-fe01-4b92-b038-862615b75136" containerName="extract-utilities" Jan 29 15:46:05 crc kubenswrapper[5008]: E0129 15:46:05.471891 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="014fe771-fe01-4b92-b038-862615b75136" containerName="registry-server" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.471974 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="014fe771-fe01-4b92-b038-862615b75136" containerName="registry-server" Jan 29 15:46:05 crc kubenswrapper[5008]: E0129 15:46:05.472062 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="014fe771-fe01-4b92-b038-862615b75136" containerName="extract-content" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.472144 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="014fe771-fe01-4b92-b038-862615b75136" containerName="extract-content" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.472417 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="014fe771-fe01-4b92-b038-862615b75136" containerName="registry-server" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.473363 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.478112 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.478187 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.478406 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.478617 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-d65gd" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.493615 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.540706 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.540746 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.540769 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.540825 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.540845 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.540891 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkx2s\" (UniqueName: \"kubernetes.io/projected/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-kube-api-access-nkx2s\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.540943 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: 
\"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.540960 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-config\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.642468 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.642555 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.642582 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.642637 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkx2s\" (UniqueName: \"kubernetes.io/projected/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-kube-api-access-nkx2s\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.642714 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.642742 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-config\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.642834 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.642872 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.642955 5008 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.644420 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.645034 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-config\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.646324 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.649684 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.657013 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.662532 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.667407 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkx2s\" (UniqueName: \"kubernetes.io/projected/ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106-kube-api-access-nkx2s\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.682990 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:05 crc kubenswrapper[5008]: I0129 15:46:05.794188 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:06 crc kubenswrapper[5008]: E0129 15:46:06.803385 5008 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa91505cf8b4d23056bc4bbc41262f55839afe4692887dc71784f0fbc58a28a6 is running failed: container process not found" containerID="aa91505cf8b4d23056bc4bbc41262f55839afe4692887dc71784f0fbc58a28a6" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 15:46:06 crc kubenswrapper[5008]: E0129 15:46:06.803762 5008 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa91505cf8b4d23056bc4bbc41262f55839afe4692887dc71784f0fbc58a28a6 is running failed: container process not found" containerID="aa91505cf8b4d23056bc4bbc41262f55839afe4692887dc71784f0fbc58a28a6" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 15:46:06 crc kubenswrapper[5008]: E0129 15:46:06.804141 5008 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa91505cf8b4d23056bc4bbc41262f55839afe4692887dc71784f0fbc58a28a6 is running failed: container process not found" containerID="aa91505cf8b4d23056bc4bbc41262f55839afe4692887dc71784f0fbc58a28a6" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 15:46:06 crc kubenswrapper[5008]: E0129 15:46:06.804172 5008 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa91505cf8b4d23056bc4bbc41262f55839afe4692887dc71784f0fbc58a28a6 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-6kzcj" podUID="c82fc869-759d-4902-9aef-fdd69452b420" containerName="registry-server" Jan 29 15:46:07 crc kubenswrapper[5008]: I0129 15:46:07.339869 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="014fe771-fe01-4b92-b038-862615b75136" path="/var/lib/kubelet/pods/014fe771-fe01-4b92-b038-862615b75136/volumes" Jan 29 15:46:10 crc kubenswrapper[5008]: I0129 15:46:10.125849 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:46:10 crc kubenswrapper[5008]: I0129 15:46:10.219161 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6t5h\" (UniqueName: \"kubernetes.io/projected/c82fc869-759d-4902-9aef-fdd69452b420-kube-api-access-m6t5h\") pod \"c82fc869-759d-4902-9aef-fdd69452b420\" (UID: \"c82fc869-759d-4902-9aef-fdd69452b420\") " Jan 29 15:46:10 crc kubenswrapper[5008]: I0129 15:46:10.219240 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82fc869-759d-4902-9aef-fdd69452b420-utilities\") pod \"c82fc869-759d-4902-9aef-fdd69452b420\" (UID: \"c82fc869-759d-4902-9aef-fdd69452b420\") " Jan 29 15:46:10 crc kubenswrapper[5008]: I0129 15:46:10.219319 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82fc869-759d-4902-9aef-fdd69452b420-catalog-content\") pod \"c82fc869-759d-4902-9aef-fdd69452b420\" (UID: \"c82fc869-759d-4902-9aef-fdd69452b420\") " Jan 29 15:46:10 crc kubenswrapper[5008]: I0129 15:46:10.223594 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c82fc869-759d-4902-9aef-fdd69452b420-utilities" (OuterVolumeSpecName: "utilities") pod "c82fc869-759d-4902-9aef-fdd69452b420" (UID: "c82fc869-759d-4902-9aef-fdd69452b420"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:46:10 crc kubenswrapper[5008]: I0129 15:46:10.228342 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c82fc869-759d-4902-9aef-fdd69452b420-kube-api-access-m6t5h" (OuterVolumeSpecName: "kube-api-access-m6t5h") pod "c82fc869-759d-4902-9aef-fdd69452b420" (UID: "c82fc869-759d-4902-9aef-fdd69452b420"). InnerVolumeSpecName "kube-api-access-m6t5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:46:10 crc kubenswrapper[5008]: I0129 15:46:10.276848 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c82fc869-759d-4902-9aef-fdd69452b420-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c82fc869-759d-4902-9aef-fdd69452b420" (UID: "c82fc869-759d-4902-9aef-fdd69452b420"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:46:10 crc kubenswrapper[5008]: I0129 15:46:10.320758 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82fc869-759d-4902-9aef-fdd69452b420-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:10 crc kubenswrapper[5008]: I0129 15:46:10.320804 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6t5h\" (UniqueName: \"kubernetes.io/projected/c82fc869-759d-4902-9aef-fdd69452b420-kube-api-access-m6t5h\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:10 crc kubenswrapper[5008]: I0129 15:46:10.320815 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82fc869-759d-4902-9aef-fdd69452b420-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:10 crc kubenswrapper[5008]: I0129 15:46:10.468378 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6kzcj" event={"ID":"c82fc869-759d-4902-9aef-fdd69452b420","Type":"ContainerDied","Data":"debd562bbbd639021d945b4eafb3e69ca2ec6a19be12a7aeaf5f75ffdbc60792"} Jan 29 15:46:10 crc kubenswrapper[5008]: I0129 15:46:10.468458 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6kzcj" Jan 29 15:46:10 crc kubenswrapper[5008]: I0129 15:46:10.501018 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6kzcj"] Jan 29 15:46:10 crc kubenswrapper[5008]: I0129 15:46:10.507867 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6kzcj"] Jan 29 15:46:11 crc kubenswrapper[5008]: E0129 15:46:10.998993 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 29 15:46:11 crc kubenswrapper[5008]: E0129 15:46:10.999179 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hhgbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-s5tkh_openstack(3db905f0-53de-4983-b70f-c883bfe123ba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:46:11 crc kubenswrapper[5008]: E0129 15:46:11.000492 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-s5tkh" podUID="3db905f0-53de-4983-b70f-c883bfe123ba" Jan 29 15:46:11 crc kubenswrapper[5008]: I0129 15:46:11.335139 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c82fc869-759d-4902-9aef-fdd69452b420" path="/var/lib/kubelet/pods/c82fc869-759d-4902-9aef-fdd69452b420/volumes" Jan 29 15:46:12 crc kubenswrapper[5008]: E0129 15:46:12.139147 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 29 15:46:12 crc kubenswrapper[5008]: E0129 15:46:12.139687 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sqfht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-d4fhx_openstack(b128f8df-0b1b-4062-9c3d-fd0f1d2e8078): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:46:12 crc kubenswrapper[5008]: E0129 15:46:12.140919 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-d4fhx" podUID="b128f8df-0b1b-4062-9c3d-fd0f1d2e8078" Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.184882 5008 scope.go:117] "RemoveContainer" containerID="b091a2c3cf526d0bdb7bf3376685f7d0e8e07a65ed76f6cc14da757b75460432" Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.227185 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-s5tkh" Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.334411 5008 scope.go:117] "RemoveContainer" containerID="6146763d50fe2db378760e8a9cd32d988036e3f58c7668e786dd7811a893a9b6" Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.351512 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3db905f0-53de-4983-b70f-c883bfe123ba-dns-svc\") pod \"3db905f0-53de-4983-b70f-c883bfe123ba\" (UID: \"3db905f0-53de-4983-b70f-c883bfe123ba\") " Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.351561 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhgbd\" (UniqueName: \"kubernetes.io/projected/3db905f0-53de-4983-b70f-c883bfe123ba-kube-api-access-hhgbd\") pod \"3db905f0-53de-4983-b70f-c883bfe123ba\" (UID: \"3db905f0-53de-4983-b70f-c883bfe123ba\") " Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.351640 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3db905f0-53de-4983-b70f-c883bfe123ba-config\") pod \"3db905f0-53de-4983-b70f-c883bfe123ba\" (UID: \"3db905f0-53de-4983-b70f-c883bfe123ba\") " Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.353068 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3db905f0-53de-4983-b70f-c883bfe123ba-config" (OuterVolumeSpecName: "config") pod "3db905f0-53de-4983-b70f-c883bfe123ba" (UID: "3db905f0-53de-4983-b70f-c883bfe123ba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.353534 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3db905f0-53de-4983-b70f-c883bfe123ba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3db905f0-53de-4983-b70f-c883bfe123ba" (UID: "3db905f0-53de-4983-b70f-c883bfe123ba"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.377885 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3db905f0-53de-4983-b70f-c883bfe123ba-kube-api-access-hhgbd" (OuterVolumeSpecName: "kube-api-access-hhgbd") pod "3db905f0-53de-4983-b70f-c883bfe123ba" (UID: "3db905f0-53de-4983-b70f-c883bfe123ba"). InnerVolumeSpecName "kube-api-access-hhgbd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.378731 5008 scope.go:117] "RemoveContainer" containerID="aa91505cf8b4d23056bc4bbc41262f55839afe4692887dc71784f0fbc58a28a6" Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.454690 5008 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3db905f0-53de-4983-b70f-c883bfe123ba-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.454722 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhgbd\" (UniqueName: \"kubernetes.io/projected/3db905f0-53de-4983-b70f-c883bfe123ba-kube-api-access-hhgbd\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.454732 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3db905f0-53de-4983-b70f-c883bfe123ba-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.470794 5008 scope.go:117] "RemoveContainer" containerID="252ca65842c9d7357ac65b037452a00da92ce644c45e1b9f0b6e067af34afb31" Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.489688 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-s5tkh" event={"ID":"3db905f0-53de-4983-b70f-c883bfe123ba","Type":"ContainerDied","Data":"30c8df1cacab6aea0ebe156b76659c8cb48d207b8fd5bb6861527a9757db6348"} Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.489709 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-s5tkh" Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.537672 5008 scope.go:117] "RemoveContainer" containerID="5c142c008e193f2bb446f8c2889a9aba1d36db2e12bc749c5dffba8460d0aa0d" Jan 29 15:46:12 crc kubenswrapper[5008]: W0129 15:46:12.547489 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb37ef43d_23ae_4a9c_af60_e616882400c3.slice/crio-ac71b5bf97b9cf8921f573ccae642ba919cab6ddc9a98574602966d2545b52f1 WatchSource:0}: Error finding container ac71b5bf97b9cf8921f573ccae642ba919cab6ddc9a98574602966d2545b52f1: Status 404 returned error can't find the container with id ac71b5bf97b9cf8921f573ccae642ba919cab6ddc9a98574602966d2545b52f1 Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.557086 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.593305 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-s5tkh"] Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.609584 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-s5tkh"] Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.693504 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.794941 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bw9wr"] Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.800106 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 15:46:12 crc kubenswrapper[5008]: I0129 15:46:12.977837 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 15:46:13 crc 
kubenswrapper[5008]: I0129 15:46:13.093164 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 15:46:13 crc kubenswrapper[5008]: W0129 15:46:13.139064 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea8d28cd_76d6_4a6e_b6bd_a0e5f0fc2106.slice/crio-c125429a61b706e12625bef274378b043b0f932bfda0c2755b53e7ee232b5f0e WatchSource:0}: Error finding container c125429a61b706e12625bef274378b043b0f932bfda0c2755b53e7ee232b5f0e: Status 404 returned error can't find the container with id c125429a61b706e12625bef274378b043b0f932bfda0c2755b53e7ee232b5f0e Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.194079 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-k5zwb"] Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.235187 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-d4fhx" Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.339013 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3db905f0-53de-4983-b70f-c883bfe123ba" path="/var/lib/kubelet/pods/3db905f0-53de-4983-b70f-c883bfe123ba/volumes" Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.374270 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b128f8df-0b1b-4062-9c3d-fd0f1d2e8078-config\") pod \"b128f8df-0b1b-4062-9c3d-fd0f1d2e8078\" (UID: \"b128f8df-0b1b-4062-9c3d-fd0f1d2e8078\") " Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.374412 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqfht\" (UniqueName: \"kubernetes.io/projected/b128f8df-0b1b-4062-9c3d-fd0f1d2e8078-kube-api-access-sqfht\") pod \"b128f8df-0b1b-4062-9c3d-fd0f1d2e8078\" (UID: \"b128f8df-0b1b-4062-9c3d-fd0f1d2e8078\") " Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.375028 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b128f8df-0b1b-4062-9c3d-fd0f1d2e8078-config" (OuterVolumeSpecName: "config") pod "b128f8df-0b1b-4062-9c3d-fd0f1d2e8078" (UID: "b128f8df-0b1b-4062-9c3d-fd0f1d2e8078"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.432980 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b128f8df-0b1b-4062-9c3d-fd0f1d2e8078-kube-api-access-sqfht" (OuterVolumeSpecName: "kube-api-access-sqfht") pod "b128f8df-0b1b-4062-9c3d-fd0f1d2e8078" (UID: "b128f8df-0b1b-4062-9c3d-fd0f1d2e8078"). InnerVolumeSpecName "kube-api-access-sqfht". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.476842 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqfht\" (UniqueName: \"kubernetes.io/projected/b128f8df-0b1b-4062-9c3d-fd0f1d2e8078-kube-api-access-sqfht\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.476928 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b128f8df-0b1b-4062-9c3d-fd0f1d2e8078-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.505820 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2c8d6871-1129-4597-8a1e-94006a17448a","Type":"ContainerStarted","Data":"de23257238bb8ce8aeab1bd141180cc6d2ae7c211dfd51f10facebf0c4eb8ac7"} Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.509821 5008 generic.go:334] "Generic (PLEG): container finished" podID="eaa396b6-206d-4e0f-8983-ee9ac16c910a" containerID="ff4985a668c8ef886a12f2fd99e8abf04774b488c8fa43886cb72f524385e4cb" exitCode=0 Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.509947 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" event={"ID":"eaa396b6-206d-4e0f-8983-ee9ac16c910a","Type":"ContainerDied","Data":"ff4985a668c8ef886a12f2fd99e8abf04774b488c8fa43886cb72f524385e4cb"} Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.511695 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2691fca5-fe1e-4796-bf43-7135e9d5a198","Type":"ContainerStarted","Data":"7986044eeb1cbc11c730082d941ee043dc7374de8a33bf15addb097a4c50eaac"} Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.513277 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106","Type":"ContainerStarted","Data":"c125429a61b706e12625bef274378b043b0f932bfda0c2755b53e7ee232b5f0e"} Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.518084 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b37ef43d-23ae-4a9c-af60-e616882400c3","Type":"ContainerStarted","Data":"ac71b5bf97b9cf8921f573ccae642ba919cab6ddc9a98574602966d2545b52f1"} Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.520737 5008 generic.go:334] "Generic (PLEG): container finished" podID="d528ee94-b499-4f20-8603-6dcc9e8b0361" containerID="074d5cb2df57c15195252921a34c3156f30decbbef34cf2601f7fc1b8f4751b1" exitCode=0 Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.520857 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" event={"ID":"d528ee94-b499-4f20-8603-6dcc9e8b0361","Type":"ContainerDied","Data":"074d5cb2df57c15195252921a34c3156f30decbbef34cf2601f7fc1b8f4751b1"} Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.523836 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bw9wr" event={"ID":"0dd702c8-269b-4fb6-a3a7-03adf93d916a","Type":"ContainerStarted","Data":"3cb944a8731235c1cca254dffa2e6f80c60f7af805b3b716839e7a4b6a0131d1"} Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.525663 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-d4fhx" 
event={"ID":"b128f8df-0b1b-4062-9c3d-fd0f1d2e8078","Type":"ContainerDied","Data":"7dcfe1c84af859609b7cd8621d352272c552ebce1b442395a6dd0d1578eb8603"} Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.525845 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-d4fhx" Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.540856 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a2958b99-a5fe-447a-93cc-64bade998854","Type":"ContainerStarted","Data":"d7c2a679600cd5acbad60649171c7cd134a1e58fbdc25f09279c839e2d796043"} Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.566963 5008 generic.go:334] "Generic (PLEG): container finished" podID="decefe5c-189e-43f8-88b2-f93a00567c3e" containerID="e32fe63a0f361be2992d303fb8560c37887275468835e55857ba8a6b44bc5268" exitCode=0 Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.567026 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9l2c6" event={"ID":"decefe5c-189e-43f8-88b2-f93a00567c3e","Type":"ContainerDied","Data":"e32fe63a0f361be2992d303fb8560c37887275468835e55857ba8a6b44bc5268"} Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.575102 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-k5zwb" event={"ID":"fb07a603-1696-4378-8d99-382d5bc152da","Type":"ContainerStarted","Data":"79074bae8ec62d3b676b76d5840f804063a918226c2f886466e6cceb9fb6bd34"} Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.650866 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-d4fhx"] Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.664042 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-d4fhx"] Jan 29 15:46:13 crc kubenswrapper[5008]: I0129 15:46:13.898254 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 15:46:14 crc kubenswrapper[5008]: I0129 15:46:14.585967 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" event={"ID":"eaa396b6-206d-4e0f-8983-ee9ac16c910a","Type":"ContainerStarted","Data":"fce6b4dc39656ca4bbcce1eca3bc51906673b6595ed9fbcce86af693837a7c36"} Jan 29 15:46:14 crc kubenswrapper[5008]: I0129 15:46:14.586338 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" Jan 29 15:46:14 crc kubenswrapper[5008]: I0129 15:46:14.591550 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4dcd0990-beb1-445a-b387-b2b78c1a39d2","Type":"ContainerStarted","Data":"2c6fa5d16085f47a1816e6e7356d1268ade8fe801f24fc04ea91e56e48e6806c"} Jan 29 15:46:14 crc kubenswrapper[5008]: I0129 15:46:14.593488 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8c8683a3-18f6-4242-9991-b542aed9143b","Type":"ContainerStarted","Data":"a8bec1298ff14291e2bcc81bb72e60423454e3549e3617dfc368a5ff2649831f"} Jan 29 15:46:14 crc kubenswrapper[5008]: I0129 15:46:14.596791 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" event={"ID":"d528ee94-b499-4f20-8603-6dcc9e8b0361","Type":"ContainerStarted","Data":"41e80ea40d300659d460b8dae3a7e24635694097a722b56e704158aae123525e"} Jan 29 15:46:14 crc kubenswrapper[5008]: I0129 15:46:14.597170 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" Jan 29 15:46:14 crc kubenswrapper[5008]: I0129 15:46:14.606772 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" podStartSLOduration=3.496525097 podStartE2EDuration="22.606755736s" podCreationTimestamp="2026-01-29 15:45:52 +0000 UTC" firstStartedPulling="2026-01-29 15:45:53.229660107 +0000 UTC m=+1096.902514344" lastFinishedPulling="2026-01-29 15:46:12.339890746 +0000 UTC m=+1116.012744983" observedRunningTime="2026-01-29 15:46:14.602983715 +0000 UTC m=+1118.275837952" watchObservedRunningTime="2026-01-29 15:46:14.606755736 +0000 UTC m=+1118.279609973" Jan 29 15:46:14 crc kubenswrapper[5008]: I0129 15:46:14.681972 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" podStartSLOduration=3.747683529 podStartE2EDuration="22.68195117s" podCreationTimestamp="2026-01-29 15:45:52 +0000 UTC" firstStartedPulling="2026-01-29 15:45:53.405727157 +0000 UTC m=+1097.078581404" lastFinishedPulling="2026-01-29 15:46:12.339994808 +0000 UTC m=+1116.012849045" observedRunningTime="2026-01-29 15:46:14.672185864 +0000 UTC m=+1118.345040121" watchObservedRunningTime="2026-01-29 15:46:14.68195117 +0000 UTC m=+1118.354805407" Jan 29 15:46:15 crc kubenswrapper[5008]: I0129 15:46:15.331832 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b128f8df-0b1b-4062-9c3d-fd0f1d2e8078" path="/var/lib/kubelet/pods/b128f8df-0b1b-4062-9c3d-fd0f1d2e8078/volumes" Jan 29 15:46:15 crc kubenswrapper[5008]: I0129 15:46:15.607197 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4d502938-9e22-4a6c-951e-b476cb87ee8f","Type":"ContainerStarted","Data":"055c119d6c3b38d87cb3eb25681a7b20c4fff4007d8b206b5a33e53857505a8d"} Jan 29 15:46:21 crc kubenswrapper[5008]: I0129 15:46:21.655658 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9l2c6" event={"ID":"decefe5c-189e-43f8-88b2-f93a00567c3e","Type":"ContainerStarted","Data":"fe84ae8c70bf02c4e800e24fb21b8ef0fd34cc6225eaec2832f3c97a133d05fb"} Jan 29 15:46:21 crc kubenswrapper[5008]: I0129 15:46:21.686514 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9l2c6" podStartSLOduration=5.059012921 podStartE2EDuration="1m59.686493369s" podCreationTimestamp="2026-01-29 15:44:22 +0000 UTC" firstStartedPulling="2026-01-29 15:44:24.134923367 +0000 UTC m=+1007.807777614" lastFinishedPulling="2026-01-29 15:46:18.762403785 +0000 UTC m=+1122.435258062" observedRunningTime="2026-01-29 15:46:21.686055998 +0000 UTC m=+1125.358910245" watchObservedRunningTime="2026-01-29 15:46:21.686493369 +0000 UTC m=+1125.359347616" Jan 29 15:46:22 crc kubenswrapper[5008]: I0129 15:46:22.684812 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9l2c6" Jan 29 15:46:22 crc kubenswrapper[5008]: I0129 15:46:22.686322 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9l2c6" Jan 29 15:46:22 crc kubenswrapper[5008]: I0129 15:46:22.727021 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" Jan 29 15:46:22 crc kubenswrapper[5008]: I0129 15:46:22.978317 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.022927 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vs5xd"] Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.671923 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b37ef43d-23ae-4a9c-af60-e616882400c3","Type":"ContainerStarted","Data":"96461c78d0f5c7bcb23c8e1e5a587ad4eddb2cb92af7ccc28efdea87998e8286"} Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.672352 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.673246 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2c8d6871-1129-4597-8a1e-94006a17448a","Type":"ContainerStarted","Data":"5dfcdea1095ee2d3879ba921942b33575acdace6db8ae39b151b1c219157edc2"} Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.674429 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bw9wr" event={"ID":"0dd702c8-269b-4fb6-a3a7-03adf93d916a","Type":"ContainerStarted","Data":"4c011b9053a81c0b12cffe67218f924b7e8abcd04528d119ed8892ef660b7e19"} Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.674818 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-bw9wr" Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.676204 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4d502938-9e22-4a6c-951e-b476cb87ee8f","Type":"ContainerStarted","Data":"7ef0619f99a70223bdd92df7c6c63223e72c780800e32fc056d88db81748337a"} Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.677208 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106","Type":"ContainerStarted","Data":"6c80d9650f93ccf1ee0a61414781394325e3257f3cfa6be947295bed5e1e4e97"} Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.678381 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2691fca5-fe1e-4796-bf43-7135e9d5a198","Type":"ContainerStarted","Data":"9e1a6f84d62e1a65b8306defe6e32b9e1a35b50bcd62a48cbe68e10cb95676c7"} Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.678504 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.679889 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a2958b99-a5fe-447a-93cc-64bade998854","Type":"ContainerStarted","Data":"4fb6ed72bca123054fb804f9974ec317326298fe7e9c9208c5b3b6c813fe0609"} Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.681091 5008 generic.go:334] "Generic (PLEG): container finished" podID="fb07a603-1696-4378-8d99-382d5bc152da" containerID="a9c3df6ce45ce01e23a674a545d9a91984df9bf8df7e1312c315e20a4b729728" exitCode=0 Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.681131 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-k5zwb" event={"ID":"fb07a603-1696-4378-8d99-382d5bc152da","Type":"ContainerDied","Data":"a9c3df6ce45ce01e23a674a545d9a91984df9bf8df7e1312c315e20a4b729728"} Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.681323 5008 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" podUID="eaa396b6-206d-4e0f-8983-ee9ac16c910a" containerName="dnsmasq-dns" containerID="cri-o://fce6b4dc39656ca4bbcce1eca3bc51906673b6595ed9fbcce86af693837a7c36" gracePeriod=10 Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.690925 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=21.210634252 podStartE2EDuration="27.690907543s" podCreationTimestamp="2026-01-29 15:45:56 +0000 UTC" firstStartedPulling="2026-01-29 15:46:12.561232025 +0000 UTC m=+1116.234086262" lastFinishedPulling="2026-01-29 15:46:19.041505316 +0000 UTC m=+1122.714359553" observedRunningTime="2026-01-29 15:46:23.688484494 +0000 UTC m=+1127.361338731" watchObservedRunningTime="2026-01-29 15:46:23.690907543 +0000 UTC m=+1127.363761780" Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.742523 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-bw9wr" podStartSLOduration=13.419706567 podStartE2EDuration="21.742505084s" podCreationTimestamp="2026-01-29 15:46:02 +0000 UTC" firstStartedPulling="2026-01-29 15:46:12.881397432 +0000 UTC m=+1116.554251669" lastFinishedPulling="2026-01-29 15:46:21.204195949 +0000 UTC m=+1124.877050186" observedRunningTime="2026-01-29 15:46:23.73653448 +0000 UTC m=+1127.409388707" watchObservedRunningTime="2026-01-29 15:46:23.742505084 +0000 UTC m=+1127.415359341" Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.821146 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-9l2c6" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" containerName="registry-server" probeResult="failure" output=< Jan 29 15:46:23 crc kubenswrapper[5008]: timeout: failed to connect service ":50051" within 1s Jan 29 15:46:23 crc kubenswrapper[5008]: > Jan 29 15:46:23 crc kubenswrapper[5008]: I0129 15:46:23.825711 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=16.410767842 podStartE2EDuration="25.825697792s" podCreationTimestamp="2026-01-29 15:45:58 +0000 UTC" firstStartedPulling="2026-01-29 15:46:12.99755086 +0000 UTC m=+1116.670405097" lastFinishedPulling="2026-01-29 15:46:22.4124808 +0000 UTC m=+1126.085335047" observedRunningTime="2026-01-29 15:46:23.822997898 +0000 UTC m=+1127.495852135" watchObservedRunningTime="2026-01-29 15:46:23.825697792 +0000 UTC m=+1127.498552029" Jan 29 15:46:24 crc kubenswrapper[5008]: I0129 15:46:24.690356 5008 generic.go:334] "Generic (PLEG): container finished" podID="eaa396b6-206d-4e0f-8983-ee9ac16c910a" containerID="fce6b4dc39656ca4bbcce1eca3bc51906673b6595ed9fbcce86af693837a7c36" exitCode=0 Jan 29 15:46:24 crc kubenswrapper[5008]: I0129 15:46:24.690442 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" event={"ID":"eaa396b6-206d-4e0f-8983-ee9ac16c910a","Type":"ContainerDied","Data":"fce6b4dc39656ca4bbcce1eca3bc51906673b6595ed9fbcce86af693837a7c36"} Jan 29 15:46:24 crc kubenswrapper[5008]: I0129 15:46:24.693597 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-k5zwb" event={"ID":"fb07a603-1696-4378-8d99-382d5bc152da","Type":"ContainerStarted","Data":"ad93a2696fbfbb039cb97f5b3d24bc4c2c2b3502def665e7cb6e28ff3061ad4a"} Jan 29 15:46:24 crc kubenswrapper[5008]: I0129 15:46:24.785429 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" Jan 29 15:46:24 crc kubenswrapper[5008]: I0129 15:46:24.862349 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaa396b6-206d-4e0f-8983-ee9ac16c910a-dns-svc\") pod \"eaa396b6-206d-4e0f-8983-ee9ac16c910a\" (UID: \"eaa396b6-206d-4e0f-8983-ee9ac16c910a\") " Jan 29 15:46:24 crc kubenswrapper[5008]: I0129 15:46:24.862571 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa396b6-206d-4e0f-8983-ee9ac16c910a-config\") pod \"eaa396b6-206d-4e0f-8983-ee9ac16c910a\" (UID: \"eaa396b6-206d-4e0f-8983-ee9ac16c910a\") " Jan 29 15:46:24 crc kubenswrapper[5008]: I0129 15:46:24.862604 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gt2jc\" (UniqueName: \"kubernetes.io/projected/eaa396b6-206d-4e0f-8983-ee9ac16c910a-kube-api-access-gt2jc\") pod \"eaa396b6-206d-4e0f-8983-ee9ac16c910a\" (UID: \"eaa396b6-206d-4e0f-8983-ee9ac16c910a\") " Jan 29 15:46:24 crc kubenswrapper[5008]: I0129 15:46:24.877171 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaa396b6-206d-4e0f-8983-ee9ac16c910a-kube-api-access-gt2jc" (OuterVolumeSpecName: "kube-api-access-gt2jc") pod "eaa396b6-206d-4e0f-8983-ee9ac16c910a" (UID: "eaa396b6-206d-4e0f-8983-ee9ac16c910a"). InnerVolumeSpecName "kube-api-access-gt2jc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:46:24 crc kubenswrapper[5008]: I0129 15:46:24.897359 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaa396b6-206d-4e0f-8983-ee9ac16c910a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eaa396b6-206d-4e0f-8983-ee9ac16c910a" (UID: "eaa396b6-206d-4e0f-8983-ee9ac16c910a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:24 crc kubenswrapper[5008]: I0129 15:46:24.905495 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaa396b6-206d-4e0f-8983-ee9ac16c910a-config" (OuterVolumeSpecName: "config") pod "eaa396b6-206d-4e0f-8983-ee9ac16c910a" (UID: "eaa396b6-206d-4e0f-8983-ee9ac16c910a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:24 crc kubenswrapper[5008]: I0129 15:46:24.964510 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa396b6-206d-4e0f-8983-ee9ac16c910a-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:24 crc kubenswrapper[5008]: I0129 15:46:24.964545 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gt2jc\" (UniqueName: \"kubernetes.io/projected/eaa396b6-206d-4e0f-8983-ee9ac16c910a-kube-api-access-gt2jc\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:24 crc kubenswrapper[5008]: I0129 15:46:24.964554 5008 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaa396b6-206d-4e0f-8983-ee9ac16c910a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.262666 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-qkf4v"] Jan 29 15:46:25 crc kubenswrapper[5008]: E0129 15:46:25.263395 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c82fc869-759d-4902-9aef-fdd69452b420" containerName="extract-utilities" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.263415 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="c82fc869-759d-4902-9aef-fdd69452b420" containerName="extract-utilities" Jan 29 15:46:25 crc kubenswrapper[5008]: E0129 15:46:25.263433 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c82fc869-759d-4902-9aef-fdd69452b420" containerName="registry-server" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.263442 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="c82fc869-759d-4902-9aef-fdd69452b420" containerName="registry-server" Jan 29 15:46:25 crc kubenswrapper[5008]: E0129 15:46:25.263468 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa396b6-206d-4e0f-8983-ee9ac16c910a" containerName="dnsmasq-dns" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.263476 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa396b6-206d-4e0f-8983-ee9ac16c910a" containerName="dnsmasq-dns" Jan 29 15:46:25 crc kubenswrapper[5008]: E0129 15:46:25.263494 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa396b6-206d-4e0f-8983-ee9ac16c910a" containerName="init" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.263501 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa396b6-206d-4e0f-8983-ee9ac16c910a" containerName="init" Jan 29 15:46:25 crc kubenswrapper[5008]: E0129 15:46:25.263512 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c82fc869-759d-4902-9aef-fdd69452b420" containerName="extract-content" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.263521 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="c82fc869-759d-4902-9aef-fdd69452b420" containerName="extract-content" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.263692 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaa396b6-206d-4e0f-8983-ee9ac16c910a" containerName="dnsmasq-dns" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.263708 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="c82fc869-759d-4902-9aef-fdd69452b420" containerName="registry-server" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.264354 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.266731 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.288226 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-qkf4v"] Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.372265 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/90c13843-e314-4465-af68-367fc8d59731-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.373120 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90c13843-e314-4465-af68-367fc8d59731-combined-ca-bundle\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.373189 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv8zr\" (UniqueName: \"kubernetes.io/projected/90c13843-e314-4465-af68-367fc8d59731-kube-api-access-zv8zr\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.373297 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/90c13843-e314-4465-af68-367fc8d59731-ovs-rundir\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.373447 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90c13843-e314-4465-af68-367fc8d59731-config\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.373502 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/90c13843-e314-4465-af68-367fc8d59731-ovn-rundir\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.406684 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-676z4"] Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.411191 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.412852 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.419490 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-676z4"] Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.475572 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90c13843-e314-4465-af68-367fc8d59731-combined-ca-bundle\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.475628 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv8zr\" (UniqueName: \"kubernetes.io/projected/90c13843-e314-4465-af68-367fc8d59731-kube-api-access-zv8zr\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.475668 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-676z4\" (UID: \"6fd1d492-c335-4318-8eb9-bf8140f43b70\") " pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.475693 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/90c13843-e314-4465-af68-367fc8d59731-ovs-rundir\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.475711 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-config\") pod \"dnsmasq-dns-5bf47b49b7-676z4\" (UID: \"6fd1d492-c335-4318-8eb9-bf8140f43b70\") " pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.475728 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8r8s\" (UniqueName: \"kubernetes.io/projected/6fd1d492-c335-4318-8eb9-bf8140f43b70-kube-api-access-r8r8s\") pod \"dnsmasq-dns-5bf47b49b7-676z4\" (UID: \"6fd1d492-c335-4318-8eb9-bf8140f43b70\") " pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.475749 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-676z4\" (UID: \"6fd1d492-c335-4318-8eb9-bf8140f43b70\") " pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.475800 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90c13843-e314-4465-af68-367fc8d59731-config\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " 
pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.475826 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/90c13843-e314-4465-af68-367fc8d59731-ovn-rundir\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.475870 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/90c13843-e314-4465-af68-367fc8d59731-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.476638 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/90c13843-e314-4465-af68-367fc8d59731-ovs-rundir\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.476660 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/90c13843-e314-4465-af68-367fc8d59731-ovn-rundir\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.476908 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90c13843-e314-4465-af68-367fc8d59731-config\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.480577 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90c13843-e314-4465-af68-367fc8d59731-combined-ca-bundle\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.480587 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/90c13843-e314-4465-af68-367fc8d59731-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.510502 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv8zr\" (UniqueName: \"kubernetes.io/projected/90c13843-e314-4465-af68-367fc8d59731-kube-api-access-zv8zr\") pod \"ovn-controller-metrics-qkf4v\" (UID: \"90c13843-e314-4465-af68-367fc8d59731\") " pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.577439 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-676z4\" (UID: \"6fd1d492-c335-4318-8eb9-bf8140f43b70\") " pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 
15:46:25.577521 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-config\") pod \"dnsmasq-dns-5bf47b49b7-676z4\" (UID: \"6fd1d492-c335-4318-8eb9-bf8140f43b70\") " pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.577553 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8r8s\" (UniqueName: \"kubernetes.io/projected/6fd1d492-c335-4318-8eb9-bf8140f43b70-kube-api-access-r8r8s\") pod \"dnsmasq-dns-5bf47b49b7-676z4\" (UID: \"6fd1d492-c335-4318-8eb9-bf8140f43b70\") " pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.577582 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-676z4\" (UID: \"6fd1d492-c335-4318-8eb9-bf8140f43b70\") " pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.578388 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-676z4\" (UID: \"6fd1d492-c335-4318-8eb9-bf8140f43b70\") " pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.578562 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-676z4\" (UID: \"6fd1d492-c335-4318-8eb9-bf8140f43b70\") " pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.579841 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-config\") pod \"dnsmasq-dns-5bf47b49b7-676z4\" (UID: \"6fd1d492-c335-4318-8eb9-bf8140f43b70\") " pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.597634 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8r8s\" (UniqueName: \"kubernetes.io/projected/6fd1d492-c335-4318-8eb9-bf8140f43b70-kube-api-access-r8r8s\") pod \"dnsmasq-dns-5bf47b49b7-676z4\" (UID: \"6fd1d492-c335-4318-8eb9-bf8140f43b70\") " pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.602053 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-qkf4v" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.707897 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-k5zwb" event={"ID":"fb07a603-1696-4378-8d99-382d5bc152da","Type":"ContainerStarted","Data":"7e0ade73e0587d08ed6b9c03fa1da934522b72d8922cde7b3b3e88a4f6b44af7"} Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.707957 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.707977 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.714611 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" event={"ID":"eaa396b6-206d-4e0f-8983-ee9ac16c910a","Type":"ContainerDied","Data":"309fd497280f26c9fefa297dd5016a654c256866d10ab9c20a829153df0b8be3"} Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.714656 5008 scope.go:117] "RemoveContainer" containerID="fce6b4dc39656ca4bbcce1eca3bc51906673b6595ed9fbcce86af693837a7c36" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.714836 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-vs5xd" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.736399 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.737037 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-k5zwb" podStartSLOduration=14.739579183 podStartE2EDuration="22.737019498s" podCreationTimestamp="2026-01-29 15:46:03 +0000 UTC" firstStartedPulling="2026-01-29 15:46:13.206819736 +0000 UTC m=+1116.879673973" lastFinishedPulling="2026-01-29 15:46:21.204260041 +0000 UTC m=+1124.877114288" observedRunningTime="2026-01-29 15:46:25.735937602 +0000 UTC m=+1129.408791839" watchObservedRunningTime="2026-01-29 15:46:25.737019498 +0000 UTC m=+1129.409873735" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.751864 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-676z4"] Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.758024 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vs5xd"] Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.765072 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vs5xd"] Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.782400 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-znv2j"] Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.786569 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.790077 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.793189 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-znv2j"] Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.881552 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-znv2j\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.881642 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-znv2j\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.881754 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhjbr\" (UniqueName: \"kubernetes.io/projected/551951b1-6601-4b58-ab3c-aa03c962e65d-kube-api-access-qhjbr\") pod \"dnsmasq-dns-8554648995-znv2j\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.881778 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-config\") pod \"dnsmasq-dns-8554648995-znv2j\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.881924 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-dns-svc\") pod \"dnsmasq-dns-8554648995-znv2j\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.900317 5008 scope.go:117] "RemoveContainer" containerID="ff4985a668c8ef886a12f2fd99e8abf04774b488c8fa43886cb72f524385e4cb" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.983701 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhjbr\" (UniqueName: \"kubernetes.io/projected/551951b1-6601-4b58-ab3c-aa03c962e65d-kube-api-access-qhjbr\") pod \"dnsmasq-dns-8554648995-znv2j\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.984088 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-config\") pod \"dnsmasq-dns-8554648995-znv2j\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.984125 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-dns-svc\") pod \"dnsmasq-dns-8554648995-znv2j\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.984175 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-znv2j\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.984213 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-znv2j\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.985207 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-dns-svc\") pod \"dnsmasq-dns-8554648995-znv2j\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.985292 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-znv2j\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.985406 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-znv2j\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:25 crc kubenswrapper[5008]: I0129 15:46:25.985961 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-config\") pod \"dnsmasq-dns-8554648995-znv2j\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:26 crc kubenswrapper[5008]: I0129 15:46:26.003557 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhjbr\" (UniqueName: \"kubernetes.io/projected/551951b1-6601-4b58-ab3c-aa03c962e65d-kube-api-access-qhjbr\") pod \"dnsmasq-dns-8554648995-znv2j\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:26 crc kubenswrapper[5008]: I0129 15:46:26.102543 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:26 crc kubenswrapper[5008]: I0129 15:46:26.449030 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-676z4"] Jan 29 15:46:26 crc kubenswrapper[5008]: W0129 15:46:26.457898 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fd1d492_c335_4318_8eb9_bf8140f43b70.slice/crio-bc5c912ef7f1d4f332ceee6db68924660445e5eccec993a762814ffa92dc97e9 WatchSource:0}: Error finding container bc5c912ef7f1d4f332ceee6db68924660445e5eccec993a762814ffa92dc97e9: Status 404 returned error can't find the container with id bc5c912ef7f1d4f332ceee6db68924660445e5eccec993a762814ffa92dc97e9 Jan 29 15:46:26 crc kubenswrapper[5008]: I0129 15:46:26.526236 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-qkf4v"] Jan 29 15:46:26 crc kubenswrapper[5008]: W0129 15:46:26.531084 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90c13843_e314_4465_af68_367fc8d59731.slice/crio-d10764ec963d308cc55bc9cf86c88bce06d1b7ae5ee8b026ef43b023e89f1805 WatchSource:0}: Error finding container d10764ec963d308cc55bc9cf86c88bce06d1b7ae5ee8b026ef43b023e89f1805: Status 404 returned error can't find the container with id d10764ec963d308cc55bc9cf86c88bce06d1b7ae5ee8b026ef43b023e89f1805 Jan 29 15:46:26 crc kubenswrapper[5008]: I0129 15:46:26.615202 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-znv2j"] Jan 29 15:46:26 crc kubenswrapper[5008]: W0129 15:46:26.644210 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod551951b1_6601_4b58_ab3c_aa03c962e65d.slice/crio-6830a4e592ccf7b5b08a72566d9d3f5dc6e7b0b1bdbcf42341ded46c73a34940 WatchSource:0}: Error finding container 6830a4e592ccf7b5b08a72566d9d3f5dc6e7b0b1bdbcf42341ded46c73a34940: Status 404 returned error can't find the container with id 6830a4e592ccf7b5b08a72566d9d3f5dc6e7b0b1bdbcf42341ded46c73a34940 Jan 29 15:46:26 crc kubenswrapper[5008]: I0129 15:46:26.724106 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4d502938-9e22-4a6c-951e-b476cb87ee8f","Type":"ContainerStarted","Data":"5431f0774042099b394cbe05efcc819174663c681943e8592d97aeb438d5eae6"} Jan 29 15:46:26 crc kubenswrapper[5008]: I0129 15:46:26.726897 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106","Type":"ContainerStarted","Data":"af60a5c9f2d5dee8ca8e1563f1892d8501f2e081ae5f8239ebe76fdf7298ba51"} Jan 29 15:46:26 crc kubenswrapper[5008]: I0129 15:46:26.728814 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-znv2j" event={"ID":"551951b1-6601-4b58-ab3c-aa03c962e65d","Type":"ContainerStarted","Data":"6830a4e592ccf7b5b08a72566d9d3f5dc6e7b0b1bdbcf42341ded46c73a34940"} Jan 29 15:46:26 crc kubenswrapper[5008]: I0129 15:46:26.731161 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" event={"ID":"6fd1d492-c335-4318-8eb9-bf8140f43b70","Type":"ContainerStarted","Data":"bc5c912ef7f1d4f332ceee6db68924660445e5eccec993a762814ffa92dc97e9"} Jan 29 15:46:26 crc kubenswrapper[5008]: I0129 15:46:26.736160 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-metrics-qkf4v" event={"ID":"90c13843-e314-4465-af68-367fc8d59731","Type":"ContainerStarted","Data":"d10764ec963d308cc55bc9cf86c88bce06d1b7ae5ee8b026ef43b023e89f1805"} Jan 29 15:46:26 crc kubenswrapper[5008]: I0129 15:46:26.745677 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=13.774593186 podStartE2EDuration="24.745662336s" podCreationTimestamp="2026-01-29 15:46:02 +0000 UTC" firstStartedPulling="2026-01-29 15:46:15.021327463 +0000 UTC m=+1118.694181710" lastFinishedPulling="2026-01-29 15:46:25.992396623 +0000 UTC m=+1129.665250860" observedRunningTime="2026-01-29 15:46:26.743168885 +0000 UTC m=+1130.416023122" watchObservedRunningTime="2026-01-29 15:46:26.745662336 +0000 UTC m=+1130.418516573" Jan 29 15:46:26 crc kubenswrapper[5008]: I0129 15:46:26.771863 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=9.920807456 podStartE2EDuration="22.771841611s" podCreationTimestamp="2026-01-29 15:46:04 +0000 UTC" firstStartedPulling="2026-01-29 15:46:13.141571003 +0000 UTC m=+1116.814425240" lastFinishedPulling="2026-01-29 15:46:25.992605158 +0000 UTC m=+1129.665459395" observedRunningTime="2026-01-29 15:46:26.765136589 +0000 UTC m=+1130.437990846" watchObservedRunningTime="2026-01-29 15:46:26.771841611 +0000 UTC m=+1130.444695868" Jan 29 15:46:26 crc kubenswrapper[5008]: I0129 15:46:26.798113 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:26 crc kubenswrapper[5008]: I0129 15:46:26.849817 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:27 crc kubenswrapper[5008]: I0129 15:46:27.095603 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 29 15:46:27 crc kubenswrapper[5008]: I0129 15:46:27.340339 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaa396b6-206d-4e0f-8983-ee9ac16c910a" path="/var/lib/kubelet/pods/eaa396b6-206d-4e0f-8983-ee9ac16c910a/volumes" Jan 29 15:46:27 crc kubenswrapper[5008]: I0129 15:46:27.746758 5008 generic.go:334] "Generic (PLEG): container finished" podID="551951b1-6601-4b58-ab3c-aa03c962e65d" containerID="2b40c44564e987f20174f64ac60acdae94665df690bdf09a0b0f3a38b7da3092" exitCode=0 Jan 29 15:46:27 crc kubenswrapper[5008]: I0129 15:46:27.746859 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-znv2j" event={"ID":"551951b1-6601-4b58-ab3c-aa03c962e65d","Type":"ContainerDied","Data":"2b40c44564e987f20174f64ac60acdae94665df690bdf09a0b0f3a38b7da3092"} Jan 29 15:46:27 crc kubenswrapper[5008]: I0129 15:46:27.748736 5008 generic.go:334] "Generic (PLEG): container finished" podID="6fd1d492-c335-4318-8eb9-bf8140f43b70" containerID="4f407748b4b1147fb96c147c6104479ab174b2b946fa496bb5cba49a602159b3" exitCode=0 Jan 29 15:46:27 crc kubenswrapper[5008]: I0129 15:46:27.748789 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" event={"ID":"6fd1d492-c335-4318-8eb9-bf8140f43b70","Type":"ContainerDied","Data":"4f407748b4b1147fb96c147c6104479ab174b2b946fa496bb5cba49a602159b3"} Jan 29 15:46:27 crc kubenswrapper[5008]: I0129 15:46:27.750917 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-qkf4v" 
event={"ID":"90c13843-e314-4465-af68-367fc8d59731","Type":"ContainerStarted","Data":"f7a25b072fa4182b25996d1c152c76441aa99f4d320197ae565130accb56e11d"} Jan 29 15:46:27 crc kubenswrapper[5008]: I0129 15:46:27.751827 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:27 crc kubenswrapper[5008]: I0129 15:46:27.837809 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 29 15:46:27 crc kubenswrapper[5008]: I0129 15:46:27.859690 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-qkf4v" podStartSLOduration=2.85967024 podStartE2EDuration="2.85967024s" podCreationTimestamp="2026-01-29 15:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:46:27.815311214 +0000 UTC m=+1131.488165481" watchObservedRunningTime="2026-01-29 15:46:27.85967024 +0000 UTC m=+1131.532524477" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.134007 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.179318 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.231341 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.371163 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-config\") pod \"6fd1d492-c335-4318-8eb9-bf8140f43b70\" (UID: \"6fd1d492-c335-4318-8eb9-bf8140f43b70\") " Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.371302 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-ovsdbserver-nb\") pod \"6fd1d492-c335-4318-8eb9-bf8140f43b70\" (UID: \"6fd1d492-c335-4318-8eb9-bf8140f43b70\") " Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.371361 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8r8s\" (UniqueName: \"kubernetes.io/projected/6fd1d492-c335-4318-8eb9-bf8140f43b70-kube-api-access-r8r8s\") pod \"6fd1d492-c335-4318-8eb9-bf8140f43b70\" (UID: \"6fd1d492-c335-4318-8eb9-bf8140f43b70\") " Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.371409 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-dns-svc\") pod \"6fd1d492-c335-4318-8eb9-bf8140f43b70\" (UID: \"6fd1d492-c335-4318-8eb9-bf8140f43b70\") " Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.379193 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fd1d492-c335-4318-8eb9-bf8140f43b70-kube-api-access-r8r8s" (OuterVolumeSpecName: "kube-api-access-r8r8s") pod "6fd1d492-c335-4318-8eb9-bf8140f43b70" (UID: "6fd1d492-c335-4318-8eb9-bf8140f43b70"). InnerVolumeSpecName "kube-api-access-r8r8s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.392169 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6fd1d492-c335-4318-8eb9-bf8140f43b70" (UID: "6fd1d492-c335-4318-8eb9-bf8140f43b70"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.397403 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6fd1d492-c335-4318-8eb9-bf8140f43b70" (UID: "6fd1d492-c335-4318-8eb9-bf8140f43b70"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.411495 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-config" (OuterVolumeSpecName: "config") pod "6fd1d492-c335-4318-8eb9-bf8140f43b70" (UID: "6fd1d492-c335-4318-8eb9-bf8140f43b70"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.473527 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.473568 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.473580 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8r8s\" (UniqueName: \"kubernetes.io/projected/6fd1d492-c335-4318-8eb9-bf8140f43b70-kube-api-access-r8r8s\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.473588 5008 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fd1d492-c335-4318-8eb9-bf8140f43b70-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.759078 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-znv2j" event={"ID":"551951b1-6601-4b58-ab3c-aa03c962e65d","Type":"ContainerStarted","Data":"38684768ef3bf132eafbfafd8a54383320bc339a0e2d483f6d09264bc7219316"} Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.760058 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.761561 5008 generic.go:334] "Generic (PLEG): container finished" podID="2c8d6871-1129-4597-8a1e-94006a17448a" containerID="5dfcdea1095ee2d3879ba921942b33575acdace6db8ae39b151b1c219157edc2" exitCode=0 Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.761616 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2c8d6871-1129-4597-8a1e-94006a17448a","Type":"ContainerDied","Data":"5dfcdea1095ee2d3879ba921942b33575acdace6db8ae39b151b1c219157edc2"} Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.764286 5008 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" event={"ID":"6fd1d492-c335-4318-8eb9-bf8140f43b70","Type":"ContainerDied","Data":"bc5c912ef7f1d4f332ceee6db68924660445e5eccec993a762814ffa92dc97e9"} Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.764326 5008 scope.go:117] "RemoveContainer" containerID="4f407748b4b1147fb96c147c6104479ab174b2b946fa496bb5cba49a602159b3" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.764426 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-676z4" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.769461 5008 generic.go:334] "Generic (PLEG): container finished" podID="a2958b99-a5fe-447a-93cc-64bade998854" containerID="4fb6ed72bca123054fb804f9974ec317326298fe7e9c9208c5b3b6c813fe0609" exitCode=0 Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.770055 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a2958b99-a5fe-447a-93cc-64bade998854","Type":"ContainerDied","Data":"4fb6ed72bca123054fb804f9974ec317326298fe7e9c9208c5b3b6c813fe0609"} Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.771302 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.803881 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-znv2j" podStartSLOduration=3.803865975 podStartE2EDuration="3.803865975s" podCreationTimestamp="2026-01-29 15:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:46:28.802373439 +0000 UTC m=+1132.475227686" watchObservedRunningTime="2026-01-29 15:46:28.803865975 +0000 UTC m=+1132.476720212" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.923282 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.949215 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-znv2j"] Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.990468 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-jlh8x"] Jan 29 15:46:28 crc kubenswrapper[5008]: E0129 15:46:28.997106 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fd1d492-c335-4318-8eb9-bf8140f43b70" containerName="init" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.997163 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fd1d492-c335-4318-8eb9-bf8140f43b70" containerName="init" Jan 29 15:46:28 crc kubenswrapper[5008]: I0129 15:46:28.997516 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fd1d492-c335-4318-8eb9-bf8140f43b70" containerName="init" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.013172 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.064927 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-jlh8x"] Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.065889 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.069725 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-676z4"] Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.080228 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-676z4"] Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.083725 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-config\") pod \"dnsmasq-dns-b8fbc5445-jlh8x\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.083776 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-jlh8x\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.083829 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsqq2\" (UniqueName: \"kubernetes.io/projected/536998c7-ad3f-4b4c-ad9e-342343eded97-kube-api-access-qsqq2\") pod \"dnsmasq-dns-b8fbc5445-jlh8x\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.083867 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-jlh8x\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.083886 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-jlh8x\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.188062 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-config\") pod \"dnsmasq-dns-b8fbc5445-jlh8x\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.188145 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-jlh8x\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.188194 
5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsqq2\" (UniqueName: \"kubernetes.io/projected/536998c7-ad3f-4b4c-ad9e-342343eded97-kube-api-access-qsqq2\") pod \"dnsmasq-dns-b8fbc5445-jlh8x\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.188261 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-jlh8x\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.188286 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-jlh8x\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.189182 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-jlh8x\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.189853 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-config\") pod \"dnsmasq-dns-b8fbc5445-jlh8x\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.190885 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-jlh8x\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.191252 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-jlh8x\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.222551 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsqq2\" (UniqueName: \"kubernetes.io/projected/536998c7-ad3f-4b4c-ad9e-342343eded97-kube-api-access-qsqq2\") pod \"dnsmasq-dns-b8fbc5445-jlh8x\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.231037 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.232963 5008 util.go:30] "No sandbox for pod can be found. 
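
The run of entries above is the kubelet volume manager walking dnsmasq-dns-b8fbc5445-jlh8x from desired state to actual state: each volume in the pod spec is first confirmed attached (reconciler_common.go:245), then a mount is started (reconciler_common.go:218), and operation_generator.go:637 reports each MountVolume.SetUp success. A minimal sketch of that reconcile loop, with invented names, not kubelet's actual code:

package main

import "fmt"

// volume is a stand-in for kubelet's volume spec; only what the sketch needs.
type volume struct{ name, plugin string }

// reconcile mounts every desired volume not yet in the actual state,
// mirroring the Verify -> Mount -> SetUp-succeeded sequence in the log.
func reconcile(desired []volume, mounted map[string]bool) {
	for _, v := range desired {
		if mounted[v.name] {
			continue // already mounted; the reconciler is idempotent
		}
		fmt.Printf("VerifyControllerAttachedVolume started for %q (%s)\n", v.name, v.plugin)
		fmt.Printf("MountVolume started for %q\n", v.name)
		mounted[v.name] = true // stands in for the plugin's real SetUp work
		fmt.Printf("MountVolume.SetUp succeeded for %q\n", v.name)
	}
}

func main() {
	desired := []volume{
		{"config", "kubernetes.io/configmap"},
		{"dns-svc", "kubernetes.io/configmap"},
		{"kube-api-access-qsqq2", "kubernetes.io/projected"},
	}
	reconcile(desired, map[string]bool{})
}
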
Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.240383 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.240552 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.240701 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-zn5dg" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.240837 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.268089 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.336925 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fd1d492-c335-4318-8eb9-bf8140f43b70" path="/var/lib/kubelet/pods/6fd1d492-c335-4318-8eb9-bf8140f43b70/volumes" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.368765 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.390750 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f251affb-8e6d-445d-996c-da5e3fc29de8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.390817 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f251affb-8e6d-445d-996c-da5e3fc29de8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.390896 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75cqs\" (UniqueName: \"kubernetes.io/projected/f251affb-8e6d-445d-996c-da5e3fc29de8-kube-api-access-75cqs\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.390926 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f251affb-8e6d-445d-996c-da5e3fc29de8-scripts\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.391007 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f251affb-8e6d-445d-996c-da5e3fc29de8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.391034 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f251affb-8e6d-445d-996c-da5e3fc29de8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.391062 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f251affb-8e6d-445d-996c-da5e3fc29de8-config\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.492833 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f251affb-8e6d-445d-996c-da5e3fc29de8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.492880 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f251affb-8e6d-445d-996c-da5e3fc29de8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.492953 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75cqs\" (UniqueName: \"kubernetes.io/projected/f251affb-8e6d-445d-996c-da5e3fc29de8-kube-api-access-75cqs\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.492980 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f251affb-8e6d-445d-996c-da5e3fc29de8-scripts\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.493049 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f251affb-8e6d-445d-996c-da5e3fc29de8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.493072 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f251affb-8e6d-445d-996c-da5e3fc29de8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.493096 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f251affb-8e6d-445d-996c-da5e3fc29de8-config\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.493558 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f251affb-8e6d-445d-996c-da5e3fc29de8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.494344 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f251affb-8e6d-445d-996c-da5e3fc29de8-config\") pod \"ovn-northd-0\" (UID: 
\"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.494381 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f251affb-8e6d-445d-996c-da5e3fc29de8-scripts\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.498259 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f251affb-8e6d-445d-996c-da5e3fc29de8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.515236 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f251affb-8e6d-445d-996c-da5e3fc29de8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.517466 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f251affb-8e6d-445d-996c-da5e3fc29de8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.523525 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75cqs\" (UniqueName: \"kubernetes.io/projected/f251affb-8e6d-445d-996c-da5e3fc29de8-kube-api-access-75cqs\") pod \"ovn-northd-0\" (UID: \"f251affb-8e6d-445d-996c-da5e3fc29de8\") " pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.550913 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.675340 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-jlh8x"] Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.783700 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2c8d6871-1129-4597-8a1e-94006a17448a","Type":"ContainerStarted","Data":"00ad3225217a2d81792204c71772618c8cad067cc008f067de2957088e135a12"} Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.791569 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" event={"ID":"536998c7-ad3f-4b4c-ad9e-342343eded97","Type":"ContainerStarted","Data":"e0537e06f45058060e30f1ea912f4b791f0f50a83a241274268db34f9a3ef7fc"} Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.804050 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=26.316264828 podStartE2EDuration="34.804034587s" podCreationTimestamp="2026-01-29 15:45:55 +0000 UTC" firstStartedPulling="2026-01-29 15:46:12.716556593 +0000 UTC m=+1116.389410830" lastFinishedPulling="2026-01-29 15:46:21.204326352 +0000 UTC m=+1124.877180589" observedRunningTime="2026-01-29 15:46:29.803483014 +0000 UTC m=+1133.476337261" watchObservedRunningTime="2026-01-29 15:46:29.804034587 +0000 UTC m=+1133.476888824" Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.815544 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a2958b99-a5fe-447a-93cc-64bade998854","Type":"ContainerStarted","Data":"dda80631261104253e7f9951ab5c6feb34248b19f89fcd3f70d7ff4a902f88e3"} Jan 29 15:46:29 crc kubenswrapper[5008]: I0129 15:46:29.857621 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=27.449704975 podStartE2EDuration="35.857602697s" podCreationTimestamp="2026-01-29 15:45:54 +0000 UTC" firstStartedPulling="2026-01-29 15:46:12.946273535 +0000 UTC m=+1116.619127772" lastFinishedPulling="2026-01-29 15:46:21.354171237 +0000 UTC m=+1125.027025494" observedRunningTime="2026-01-29 15:46:29.851486558 +0000 UTC m=+1133.524340805" watchObservedRunningTime="2026-01-29 15:46:29.857602697 +0000 UTC m=+1133.530456934" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.080514 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.116376 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.119318 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.120511 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-dmwfl" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.120791 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.120928 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.130372 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.140442 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.213684 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.213770 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-cache\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.213825 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.213868 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h8nx\" (UniqueName: \"kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-kube-api-access-6h8nx\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.213909 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-lock\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.213926 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.315937 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift\") pod \"swift-storage-0\" (UID: 
\"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.316040 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-cache\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.316101 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.316152 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h8nx\" (UniqueName: \"kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-kube-api-access-6h8nx\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.316195 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-lock\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.316215 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: E0129 15:46:30.316707 5008 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 15:46:30 crc kubenswrapper[5008]: E0129 15:46:30.317115 5008 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 15:46:30 crc kubenswrapper[5008]: E0129 15:46:30.317254 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift podName:7d8596d3-fe9a-4e1a-969b-2a40a90e437d nodeName:}" failed. No retries permitted until 2026-01-29 15:46:30.817235377 +0000 UTC m=+1134.490089614 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift") pod "swift-storage-0" (UID: "7d8596d3-fe9a-4e1a-969b-2a40a90e437d") : configmap "swift-ring-files" not found Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.317090 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-cache\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.317317 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-lock\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.317777 5008 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.322818 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.334607 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h8nx\" (UniqueName: \"kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-kube-api-access-6h8nx\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.338951 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.614672 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-phmts"] Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.615886 5008 util.go:30] "No sandbox for pod can be found. 
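
swift-storage-0 is now blocked on its etc-swift volume. That volume is projected, meaning several sources are merged into a single directory, and the assembly is all or nothing: because the swift-ring-files ConfigMap does not exist yet, every SetUp attempt fails even though the pod's other volumes (including the two-phase local PV, MountDevice then SetUp) mounted fine. A toy version of the all-or-nothing assembly, with invented names:

package main

import "fmt"

// source stands in for one projected-volume source (configMap, secret, ...).
// data == nil models an object that has not been created yet.
type source struct {
	kind, name string
	data       map[string][]byte
}

// buildProjected fails on the first missing source, which is why a single
// absent ConfigMap keeps the whole etc-swift mount failing.
func buildProjected(sources []source) (map[string][]byte, error) {
	out := map[string][]byte{}
	for _, s := range sources {
		if s.data == nil {
			return nil, fmt.Errorf("%s %q not found", s.kind, s.name)
		}
		for k, v := range s.data {
			out[k] = v
		}
	}
	return out, nil
}

func main() {
	_, err := buildProjected([]source{
		{kind: "configmap", name: "swift-ring-files"}, // not published yet
	})
	fmt.Println(err) // configmap "swift-ring-files" not found
}

The swift-ring-rebalance-phmts job created just below is presumably what publishes that ConfigMap, after which the retries can succeed.
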
Need to start a new one" pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.618146 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.621137 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.627599 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-phmts"] Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.628273 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.723382 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b273a50-b2db-40d5-b4b4-6494206c606d-scripts\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.723478 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-swiftconf\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.723534 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-combined-ca-bundle\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.723565 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5b273a50-b2db-40d5-b4b4-6494206c606d-etc-swift\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.723601 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxr66\" (UniqueName: \"kubernetes.io/projected/5b273a50-b2db-40d5-b4b4-6494206c606d-kube-api-access-gxr66\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.723637 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5b273a50-b2db-40d5-b4b4-6494206c606d-ring-data-devices\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.723662 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-dispersionconf\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 
15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.826762 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b273a50-b2db-40d5-b4b4-6494206c606d-scripts\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.826847 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.826875 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-swiftconf\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.826929 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-combined-ca-bundle\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.826953 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5b273a50-b2db-40d5-b4b4-6494206c606d-etc-swift\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.826986 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxr66\" (UniqueName: \"kubernetes.io/projected/5b273a50-b2db-40d5-b4b4-6494206c606d-kube-api-access-gxr66\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.827019 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5b273a50-b2db-40d5-b4b4-6494206c606d-ring-data-devices\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.827050 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-dispersionconf\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.828729 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b273a50-b2db-40d5-b4b4-6494206c606d-scripts\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.829043 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/5b273a50-b2db-40d5-b4b4-6494206c606d-etc-swift\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.829874 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5b273a50-b2db-40d5-b4b4-6494206c606d-ring-data-devices\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.830302 5008 generic.go:334] "Generic (PLEG): container finished" podID="536998c7-ad3f-4b4c-ad9e-342343eded97" containerID="01f240842a9d581bbdd4e45548c395b54d038ece16a8256fdcca28f72896aa94" exitCode=0 Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.830388 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" event={"ID":"536998c7-ad3f-4b4c-ad9e-342343eded97","Type":"ContainerDied","Data":"01f240842a9d581bbdd4e45548c395b54d038ece16a8256fdcca28f72896aa94"} Jan 29 15:46:30 crc kubenswrapper[5008]: E0129 15:46:30.830507 5008 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 15:46:30 crc kubenswrapper[5008]: E0129 15:46:30.830522 5008 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 15:46:30 crc kubenswrapper[5008]: E0129 15:46:30.830563 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift podName:7d8596d3-fe9a-4e1a-969b-2a40a90e437d nodeName:}" failed. No retries permitted until 2026-01-29 15:46:31.830548158 +0000 UTC m=+1135.503402485 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift") pod "swift-storage-0" (UID: "7d8596d3-fe9a-4e1a-969b-2a40a90e437d") : configmap "swift-ring-files" not found Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.840406 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-dispersionconf\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.845251 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-swiftconf\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.853943 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f251affb-8e6d-445d-996c-da5e3fc29de8","Type":"ContainerStarted","Data":"83fbf9241c85af5076899607dbf81b72b96fef0c2ab74ad22bb0bb59dd9ae067"} Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.854252 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-znv2j" podUID="551951b1-6601-4b58-ab3c-aa03c962e65d" containerName="dnsmasq-dns" containerID="cri-o://38684768ef3bf132eafbfafd8a54383320bc339a0e2d483f6d09264bc7219316" gracePeriod=10 Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.855479 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-combined-ca-bundle\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.866012 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxr66\" (UniqueName: \"kubernetes.io/projected/5b273a50-b2db-40d5-b4b4-6494206c606d-kube-api-access-gxr66\") pod \"swift-ring-rebalance-phmts\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:30 crc kubenswrapper[5008]: I0129 15:46:30.979605 5008 util.go:30] "No sandbox for pod can be found. 
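
The old DNS pod is retired in parallel: "Killing container with a grace period" with gracePeriod=10 is the usual two-step stop, a termination signal first and a forced kill only if the container outlives the deadline (the marketplace registry-server below gets gracePeriod=2). A plain-process sketch of those semantics, not CRI-O's implementation:

package main

import (
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace asks the process to exit, then force-kills it if it is still
// running when the grace period ends (POSIX signals, so Linux/Unix only).
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	_ = cmd.Process.Signal(syscall.SIGTERM)
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited on its own, like the dnsmasq-dns container (exitCode=0)
	case <-time.After(grace):
		return cmd.Process.Kill() // deadline passed: SIGKILL
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	_ = cmd.Start()
	_ = stopWithGrace(cmd, 2*time.Second)
}
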
Need to start a new one" pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:31 crc kubenswrapper[5008]: I0129 15:46:31.426370 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-phmts"] Jan 29 15:46:31 crc kubenswrapper[5008]: I0129 15:46:31.843732 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:31 crc kubenswrapper[5008]: E0129 15:46:31.843977 5008 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 15:46:31 crc kubenswrapper[5008]: E0129 15:46:31.844006 5008 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 15:46:31 crc kubenswrapper[5008]: E0129 15:46:31.844074 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift podName:7d8596d3-fe9a-4e1a-969b-2a40a90e437d nodeName:}" failed. No retries permitted until 2026-01-29 15:46:33.844052354 +0000 UTC m=+1137.516906591 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift") pod "swift-storage-0" (UID: "7d8596d3-fe9a-4e1a-969b-2a40a90e437d") : configmap "swift-ring-files" not found Jan 29 15:46:31 crc kubenswrapper[5008]: I0129 15:46:31.863489 5008 generic.go:334] "Generic (PLEG): container finished" podID="551951b1-6601-4b58-ab3c-aa03c962e65d" containerID="38684768ef3bf132eafbfafd8a54383320bc339a0e2d483f6d09264bc7219316" exitCode=0 Jan 29 15:46:31 crc kubenswrapper[5008]: I0129 15:46:31.863565 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-znv2j" event={"ID":"551951b1-6601-4b58-ab3c-aa03c962e65d","Type":"ContainerDied","Data":"38684768ef3bf132eafbfafd8a54383320bc339a0e2d483f6d09264bc7219316"} Jan 29 15:46:31 crc kubenswrapper[5008]: I0129 15:46:31.864422 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-phmts" event={"ID":"5b273a50-b2db-40d5-b4b4-6494206c606d","Type":"ContainerStarted","Data":"a2a98c18f51d01224109abefa4392158329836c967e1403808990bd7b1c85a20"} Jan 29 15:46:32 crc kubenswrapper[5008]: I0129 15:46:32.736770 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9l2c6" Jan 29 15:46:32 crc kubenswrapper[5008]: I0129 15:46:32.786109 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9l2c6" Jan 29 15:46:32 crc kubenswrapper[5008]: I0129 15:46:32.973843 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9l2c6"] Jan 29 15:46:33 crc kubenswrapper[5008]: I0129 15:46:33.875529 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:33 crc kubenswrapper[5008]: E0129 15:46:33.875840 5008 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not 
found Jan 29 15:46:33 crc kubenswrapper[5008]: E0129 15:46:33.876079 5008 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 15:46:33 crc kubenswrapper[5008]: E0129 15:46:33.876161 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift podName:7d8596d3-fe9a-4e1a-969b-2a40a90e437d nodeName:}" failed. No retries permitted until 2026-01-29 15:46:37.87613559 +0000 UTC m=+1141.548989867 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift") pod "swift-storage-0" (UID: "7d8596d3-fe9a-4e1a-969b-2a40a90e437d") : configmap "swift-ring-files" not found Jan 29 15:46:33 crc kubenswrapper[5008]: I0129 15:46:33.879346 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9l2c6" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" containerName="registry-server" containerID="cri-o://fe84ae8c70bf02c4e800e24fb21b8ef0fd34cc6225eaec2832f3c97a133d05fb" gracePeriod=2 Jan 29 15:46:35 crc kubenswrapper[5008]: I0129 15:46:35.430350 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 29 15:46:35 crc kubenswrapper[5008]: I0129 15:46:35.430737 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 29 15:46:36 crc kubenswrapper[5008]: I0129 15:46:36.110463 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-znv2j" podUID="551951b1-6601-4b58-ab3c-aa03c962e65d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Jan 29 15:46:36 crc kubenswrapper[5008]: I0129 15:46:36.748578 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 29 15:46:36 crc kubenswrapper[5008]: I0129 15:46:36.748639 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 29 15:46:36 crc kubenswrapper[5008]: I0129 15:46:36.908392 5008 generic.go:334] "Generic (PLEG): container finished" podID="decefe5c-189e-43f8-88b2-f93a00567c3e" containerID="fe84ae8c70bf02c4e800e24fb21b8ef0fd34cc6225eaec2832f3c97a133d05fb" exitCode=0 Jan 29 15:46:36 crc kubenswrapper[5008]: I0129 15:46:36.908500 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9l2c6" event={"ID":"decefe5c-189e-43f8-88b2-f93a00567c3e","Type":"ContainerDied","Data":"fe84ae8c70bf02c4e800e24fb21b8ef0fd34cc6225eaec2832f3c97a133d05fb"} Jan 29 15:46:36 crc kubenswrapper[5008]: I0129 15:46:36.910814 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" event={"ID":"536998c7-ad3f-4b4c-ad9e-342343eded97","Type":"ContainerStarted","Data":"ce100ea2fe5691613542967271b16e95f2aec9ffb301642d42302c1d83db5831"} Jan 29 15:46:36 crc kubenswrapper[5008]: I0129 15:46:36.911041 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:36 crc kubenswrapper[5008]: I0129 15:46:36.950384 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" podStartSLOduration=8.950361325 podStartE2EDuration="8.950361325s" 
podCreationTimestamp="2026-01-29 15:46:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:46:36.939692047 +0000 UTC m=+1140.612546314" watchObservedRunningTime="2026-01-29 15:46:36.950361325 +0000 UTC m=+1140.623215572" Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.066743 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.138339 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhjbr\" (UniqueName: \"kubernetes.io/projected/551951b1-6601-4b58-ab3c-aa03c962e65d-kube-api-access-qhjbr\") pod \"551951b1-6601-4b58-ab3c-aa03c962e65d\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.138399 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-config\") pod \"551951b1-6601-4b58-ab3c-aa03c962e65d\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.138494 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-dns-svc\") pod \"551951b1-6601-4b58-ab3c-aa03c962e65d\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.138538 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-ovsdbserver-nb\") pod \"551951b1-6601-4b58-ab3c-aa03c962e65d\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.138574 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-ovsdbserver-sb\") pod \"551951b1-6601-4b58-ab3c-aa03c962e65d\" (UID: \"551951b1-6601-4b58-ab3c-aa03c962e65d\") " Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.185557 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/551951b1-6601-4b58-ab3c-aa03c962e65d-kube-api-access-qhjbr" (OuterVolumeSpecName: "kube-api-access-qhjbr") pod "551951b1-6601-4b58-ab3c-aa03c962e65d" (UID: "551951b1-6601-4b58-ab3c-aa03c962e65d"). InnerVolumeSpecName "kube-api-access-qhjbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.199661 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "551951b1-6601-4b58-ab3c-aa03c962e65d" (UID: "551951b1-6601-4b58-ab3c-aa03c962e65d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.204809 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "551951b1-6601-4b58-ab3c-aa03c962e65d" (UID: "551951b1-6601-4b58-ab3c-aa03c962e65d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.206039 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "551951b1-6601-4b58-ab3c-aa03c962e65d" (UID: "551951b1-6601-4b58-ab3c-aa03c962e65d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.221094 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-config" (OuterVolumeSpecName: "config") pod "551951b1-6601-4b58-ab3c-aa03c962e65d" (UID: "551951b1-6601-4b58-ab3c-aa03c962e65d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.240076 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhjbr\" (UniqueName: \"kubernetes.io/projected/551951b1-6601-4b58-ab3c-aa03c962e65d-kube-api-access-qhjbr\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.240112 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.240123 5008 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.240131 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.240140 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/551951b1-6601-4b58-ab3c-aa03c962e65d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.255153 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9l2c6" Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.343307 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/decefe5c-189e-43f8-88b2-f93a00567c3e-catalog-content\") pod \"decefe5c-189e-43f8-88b2-f93a00567c3e\" (UID: \"decefe5c-189e-43f8-88b2-f93a00567c3e\") " Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.343389 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkwsn\" (UniqueName: \"kubernetes.io/projected/decefe5c-189e-43f8-88b2-f93a00567c3e-kube-api-access-gkwsn\") pod \"decefe5c-189e-43f8-88b2-f93a00567c3e\" (UID: \"decefe5c-189e-43f8-88b2-f93a00567c3e\") " Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.343489 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/decefe5c-189e-43f8-88b2-f93a00567c3e-utilities\") pod \"decefe5c-189e-43f8-88b2-f93a00567c3e\" (UID: \"decefe5c-189e-43f8-88b2-f93a00567c3e\") " Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.347510 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/decefe5c-189e-43f8-88b2-f93a00567c3e-utilities" (OuterVolumeSpecName: "utilities") pod "decefe5c-189e-43f8-88b2-f93a00567c3e" (UID: "decefe5c-189e-43f8-88b2-f93a00567c3e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.349719 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/decefe5c-189e-43f8-88b2-f93a00567c3e-kube-api-access-gkwsn" (OuterVolumeSpecName: "kube-api-access-gkwsn") pod "decefe5c-189e-43f8-88b2-f93a00567c3e" (UID: "decefe5c-189e-43f8-88b2-f93a00567c3e"). InnerVolumeSpecName "kube-api-access-gkwsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.410877 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/decefe5c-189e-43f8-88b2-f93a00567c3e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "decefe5c-189e-43f8-88b2-f93a00567c3e" (UID: "decefe5c-189e-43f8-88b2-f93a00567c3e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.445949 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/decefe5c-189e-43f8-88b2-f93a00567c3e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.445989 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkwsn\" (UniqueName: \"kubernetes.io/projected/decefe5c-189e-43f8-88b2-f93a00567c3e-kube-api-access-gkwsn\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.446001 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/decefe5c-189e-43f8-88b2-f93a00567c3e-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.923496 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9l2c6" event={"ID":"decefe5c-189e-43f8-88b2-f93a00567c3e","Type":"ContainerDied","Data":"1e9043307f7a755489d3a239db58010b75203626c362242971f41c104845eeea"} Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.923550 5008 scope.go:117] "RemoveContainer" containerID="fe84ae8c70bf02c4e800e24fb21b8ef0fd34cc6225eaec2832f3c97a133d05fb" Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.923647 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9l2c6" Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.927173 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-znv2j" event={"ID":"551951b1-6601-4b58-ab3c-aa03c962e65d","Type":"ContainerDied","Data":"6830a4e592ccf7b5b08a72566d9d3f5dc6e7b0b1bdbcf42341ded46c73a34940"} Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.927201 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-znv2j" Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.954078 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:37 crc kubenswrapper[5008]: E0129 15:46:37.956494 5008 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 15:46:37 crc kubenswrapper[5008]: E0129 15:46:37.956515 5008 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 15:46:37 crc kubenswrapper[5008]: E0129 15:46:37.956561 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift podName:7d8596d3-fe9a-4e1a-969b-2a40a90e437d nodeName:}" failed. No retries permitted until 2026-01-29 15:46:45.956543364 +0000 UTC m=+1149.629397601 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift") pod "swift-storage-0" (UID: "7d8596d3-fe9a-4e1a-969b-2a40a90e437d") : configmap "swift-ring-files" not found Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.970994 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-znv2j"] Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.982517 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-znv2j"] Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.991909 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9l2c6"] Jan 29 15:46:37 crc kubenswrapper[5008]: I0129 15:46:37.995382 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9l2c6"] Jan 29 15:46:38 crc kubenswrapper[5008]: I0129 15:46:38.144346 5008 scope.go:117] "RemoveContainer" containerID="e32fe63a0f361be2992d303fb8560c37887275468835e55857ba8a6b44bc5268" Jan 29 15:46:38 crc kubenswrapper[5008]: I0129 15:46:38.203717 5008 scope.go:117] "RemoveContainer" containerID="11de983cd2749bba71f06017a27d73e928c76c7f26d9aaaadf0259656de48de2" Jan 29 15:46:38 crc kubenswrapper[5008]: I0129 15:46:38.237217 5008 scope.go:117] "RemoveContainer" containerID="38684768ef3bf132eafbfafd8a54383320bc339a0e2d483f6d09264bc7219316" Jan 29 15:46:38 crc kubenswrapper[5008]: I0129 15:46:38.504017 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 29 15:46:38 crc kubenswrapper[5008]: I0129 15:46:38.579454 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 29 15:46:38 crc kubenswrapper[5008]: I0129 15:46:38.944403 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f251affb-8e6d-445d-996c-da5e3fc29de8","Type":"ContainerStarted","Data":"7aab69f3b27570d6bdb4523bdea817bf898ffe9d0a38ea095cae12c9cdcf973f"} Jan 29 15:46:39 crc kubenswrapper[5008]: I0129 15:46:39.335445 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="551951b1-6601-4b58-ab3c-aa03c962e65d" path="/var/lib/kubelet/pods/551951b1-6601-4b58-ab3c-aa03c962e65d/volumes" Jan 29 15:46:39 crc kubenswrapper[5008]: I0129 15:46:39.336557 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" path="/var/lib/kubelet/pods/decefe5c-189e-43f8-88b2-f93a00567c3e/volumes" Jan 29 15:46:40 crc kubenswrapper[5008]: I0129 15:46:40.204752 5008 scope.go:117] "RemoveContainer" containerID="2b40c44564e987f20174f64ac60acdae94665df690bdf09a0b0f3a38b7da3092" Jan 29 15:46:40 crc kubenswrapper[5008]: I0129 15:46:40.869660 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 29 15:46:40 crc kubenswrapper[5008]: I0129 15:46:40.961279 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-phmts" event={"ID":"5b273a50-b2db-40d5-b4b4-6494206c606d","Type":"ContainerStarted","Data":"bda0b4b24ad7358124acc7096a07129f2529fe34f4356b7cc8add641046f3880"} Jan 29 15:46:40 crc kubenswrapper[5008]: I0129 15:46:40.963981 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"f251affb-8e6d-445d-996c-da5e3fc29de8","Type":"ContainerStarted","Data":"809c8215ecf172cb8d1fff367b10f9cc603ad0233c0f9adb12149813077f000d"} Jan 29 15:46:40 crc kubenswrapper[5008]: I0129 15:46:40.964274 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 29 15:46:40 crc kubenswrapper[5008]: I0129 15:46:40.978635 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 29 15:46:41 crc kubenswrapper[5008]: I0129 15:46:41.001424 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-phmts" podStartSLOduration=2.17650442 podStartE2EDuration="11.001398257s" podCreationTimestamp="2026-01-29 15:46:30 +0000 UTC" firstStartedPulling="2026-01-29 15:46:31.431457956 +0000 UTC m=+1135.104312193" lastFinishedPulling="2026-01-29 15:46:40.256351793 +0000 UTC m=+1143.929206030" observedRunningTime="2026-01-29 15:46:40.982871487 +0000 UTC m=+1144.655725764" watchObservedRunningTime="2026-01-29 15:46:41.001398257 +0000 UTC m=+1144.674252494" Jan 29 15:46:41 crc kubenswrapper[5008]: I0129 15:46:41.044424 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.930948742 podStartE2EDuration="12.04440618s" podCreationTimestamp="2026-01-29 15:46:29 +0000 UTC" firstStartedPulling="2026-01-29 15:46:30.098587183 +0000 UTC m=+1133.771441440" lastFinishedPulling="2026-01-29 15:46:38.212044641 +0000 UTC m=+1141.884898878" observedRunningTime="2026-01-29 15:46:41.021884473 +0000 UTC m=+1144.694738720" watchObservedRunningTime="2026-01-29 15:46:41.04440618 +0000 UTC m=+1144.717260427" Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.149547 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-d79ml"] Jan 29 15:46:44 crc kubenswrapper[5008]: E0129 15:46:44.150330 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" containerName="registry-server" Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.150352 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" containerName="registry-server" Jan 29 15:46:44 crc kubenswrapper[5008]: E0129 15:46:44.150373 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="551951b1-6601-4b58-ab3c-aa03c962e65d" containerName="dnsmasq-dns" Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.150383 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="551951b1-6601-4b58-ab3c-aa03c962e65d" containerName="dnsmasq-dns" Jan 29 15:46:44 crc kubenswrapper[5008]: E0129 15:46:44.150397 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="551951b1-6601-4b58-ab3c-aa03c962e65d" containerName="init" Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.150407 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="551951b1-6601-4b58-ab3c-aa03c962e65d" containerName="init" Jan 29 15:46:44 crc kubenswrapper[5008]: E0129 15:46:44.150434 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" containerName="extract-content" Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.151020 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" containerName="extract-content" Jan 29 15:46:44 crc kubenswrapper[5008]: E0129 15:46:44.151044 5008 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" containerName="extract-utilities" Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.151054 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" containerName="extract-utilities" Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.151638 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="decefe5c-189e-43f8-88b2-f93a00567c3e" containerName="registry-server" Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.151707 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="551951b1-6601-4b58-ab3c-aa03c962e65d" containerName="dnsmasq-dns" Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.153098 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-d79ml" Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.157014 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.165021 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-d79ml"] Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.283250 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnk2c\" (UniqueName: \"kubernetes.io/projected/907129fe-50cb-47ef-bbf6-db42cd2ad1ae-kube-api-access-tnk2c\") pod \"root-account-create-update-d79ml\" (UID: \"907129fe-50cb-47ef-bbf6-db42cd2ad1ae\") " pod="openstack/root-account-create-update-d79ml" Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.283693 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/907129fe-50cb-47ef-bbf6-db42cd2ad1ae-operator-scripts\") pod \"root-account-create-update-d79ml\" (UID: \"907129fe-50cb-47ef-bbf6-db42cd2ad1ae\") " pod="openstack/root-account-create-update-d79ml" Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.371015 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.386056 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnk2c\" (UniqueName: \"kubernetes.io/projected/907129fe-50cb-47ef-bbf6-db42cd2ad1ae-kube-api-access-tnk2c\") pod \"root-account-create-update-d79ml\" (UID: \"907129fe-50cb-47ef-bbf6-db42cd2ad1ae\") " pod="openstack/root-account-create-update-d79ml" Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.386321 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/907129fe-50cb-47ef-bbf6-db42cd2ad1ae-operator-scripts\") pod \"root-account-create-update-d79ml\" (UID: \"907129fe-50cb-47ef-bbf6-db42cd2ad1ae\") " pod="openstack/root-account-create-update-d79ml" Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.387509 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/907129fe-50cb-47ef-bbf6-db42cd2ad1ae-operator-scripts\") pod \"root-account-create-update-d79ml\" (UID: \"907129fe-50cb-47ef-bbf6-db42cd2ad1ae\") " pod="openstack/root-account-create-update-d79ml" Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.420333 5008 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnk2c\" (UniqueName: \"kubernetes.io/projected/907129fe-50cb-47ef-bbf6-db42cd2ad1ae-kube-api-access-tnk2c\") pod \"root-account-create-update-d79ml\" (UID: \"907129fe-50cb-47ef-bbf6-db42cd2ad1ae\") " pod="openstack/root-account-create-update-d79ml" Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.444643 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7pwkf"] Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.444914 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" podUID="d528ee94-b499-4f20-8603-6dcc9e8b0361" containerName="dnsmasq-dns" containerID="cri-o://41e80ea40d300659d460b8dae3a7e24635694097a722b56e704158aae123525e" gracePeriod=10 Jan 29 15:46:44 crc kubenswrapper[5008]: I0129 15:46:44.478187 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-d79ml" Jan 29 15:46:45 crc kubenswrapper[5008]: I0129 15:46:45.020880 5008 generic.go:334] "Generic (PLEG): container finished" podID="d528ee94-b499-4f20-8603-6dcc9e8b0361" containerID="41e80ea40d300659d460b8dae3a7e24635694097a722b56e704158aae123525e" exitCode=0 Jan 29 15:46:45 crc kubenswrapper[5008]: I0129 15:46:45.020970 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" event={"ID":"d528ee94-b499-4f20-8603-6dcc9e8b0361","Type":"ContainerDied","Data":"41e80ea40d300659d460b8dae3a7e24635694097a722b56e704158aae123525e"} Jan 29 15:46:45 crc kubenswrapper[5008]: I0129 15:46:45.034270 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-d79ml"] Jan 29 15:46:45 crc kubenswrapper[5008]: I0129 15:46:45.120493 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" Jan 29 15:46:45 crc kubenswrapper[5008]: I0129 15:46:45.309713 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d528ee94-b499-4f20-8603-6dcc9e8b0361-config\") pod \"d528ee94-b499-4f20-8603-6dcc9e8b0361\" (UID: \"d528ee94-b499-4f20-8603-6dcc9e8b0361\") " Jan 29 15:46:45 crc kubenswrapper[5008]: I0129 15:46:45.309881 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75fqk\" (UniqueName: \"kubernetes.io/projected/d528ee94-b499-4f20-8603-6dcc9e8b0361-kube-api-access-75fqk\") pod \"d528ee94-b499-4f20-8603-6dcc9e8b0361\" (UID: \"d528ee94-b499-4f20-8603-6dcc9e8b0361\") " Jan 29 15:46:45 crc kubenswrapper[5008]: I0129 15:46:45.310041 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d528ee94-b499-4f20-8603-6dcc9e8b0361-dns-svc\") pod \"d528ee94-b499-4f20-8603-6dcc9e8b0361\" (UID: \"d528ee94-b499-4f20-8603-6dcc9e8b0361\") " Jan 29 15:46:45 crc kubenswrapper[5008]: I0129 15:46:45.316013 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d528ee94-b499-4f20-8603-6dcc9e8b0361-kube-api-access-75fqk" (OuterVolumeSpecName: "kube-api-access-75fqk") pod "d528ee94-b499-4f20-8603-6dcc9e8b0361" (UID: "d528ee94-b499-4f20-8603-6dcc9e8b0361"). InnerVolumeSpecName "kube-api-access-75fqk". 
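
The sequence just above is a clean graceful shutdown of dnsmasq-dns-57d769cc4f-7pwkf: the API DELETE at 15:46:44.444 leads the kubelet to kill the dnsmasq-dns container with a 10-second grace period, the container exits 0 within that window, and its config, dns-svc, and kube-api-access-75fqk volumes are torn down below before the final SyncLoop REMOVE. The non-default grace period (the Kubernetes default is 30s) would come from a setting along these lines, assuming it is declared in the pod spec rather than passed as a delete option:

    spec:
      terminationGracePeriodSeconds: 10  # assumption: explicit in the dnsmasq pod template
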
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:46:45 crc kubenswrapper[5008]: I0129 15:46:45.353690 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d528ee94-b499-4f20-8603-6dcc9e8b0361-config" (OuterVolumeSpecName: "config") pod "d528ee94-b499-4f20-8603-6dcc9e8b0361" (UID: "d528ee94-b499-4f20-8603-6dcc9e8b0361"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:45 crc kubenswrapper[5008]: I0129 15:46:45.353860 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d528ee94-b499-4f20-8603-6dcc9e8b0361-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d528ee94-b499-4f20-8603-6dcc9e8b0361" (UID: "d528ee94-b499-4f20-8603-6dcc9e8b0361"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:45 crc kubenswrapper[5008]: I0129 15:46:45.412622 5008 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d528ee94-b499-4f20-8603-6dcc9e8b0361-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:45 crc kubenswrapper[5008]: I0129 15:46:45.412664 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d528ee94-b499-4f20-8603-6dcc9e8b0361-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:45 crc kubenswrapper[5008]: I0129 15:46:45.412676 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75fqk\" (UniqueName: \"kubernetes.io/projected/d528ee94-b499-4f20-8603-6dcc9e8b0361-kube-api-access-75fqk\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.021835 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:46:46 crc kubenswrapper[5008]: E0129 15:46:46.022356 5008 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 15:46:46 crc kubenswrapper[5008]: E0129 15:46:46.022373 5008 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 15:46:46 crc kubenswrapper[5008]: E0129 15:46:46.022422 5008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift podName:7d8596d3-fe9a-4e1a-969b-2a40a90e437d nodeName:}" failed. No retries permitted until 2026-01-29 15:47:02.022404248 +0000 UTC m=+1165.695258485 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift") pod "swift-storage-0" (UID: "7d8596d3-fe9a-4e1a-969b-2a40a90e437d") : configmap "swift-ring-files" not found Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.030907 5008 generic.go:334] "Generic (PLEG): container finished" podID="8c8683a3-18f6-4242-9991-b542aed9143b" containerID="a8bec1298ff14291e2bcc81bb72e60423454e3549e3617dfc368a5ff2649831f" exitCode=0 Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.030969 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8c8683a3-18f6-4242-9991-b542aed9143b","Type":"ContainerDied","Data":"a8bec1298ff14291e2bcc81bb72e60423454e3549e3617dfc368a5ff2649831f"} Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.035841 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" event={"ID":"d528ee94-b499-4f20-8603-6dcc9e8b0361","Type":"ContainerDied","Data":"7e40b85878fc9eb94adb0dc672f4b4d3fd0475b78dd43bc83dd4dd513c313465"} Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.035899 5008 scope.go:117] "RemoveContainer" containerID="41e80ea40d300659d460b8dae3a7e24635694097a722b56e704158aae123525e" Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.035955 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7pwkf" Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.049018 5008 generic.go:334] "Generic (PLEG): container finished" podID="4dcd0990-beb1-445a-b387-b2b78c1a39d2" containerID="2c6fa5d16085f47a1816e6e7356d1268ade8fe801f24fc04ea91e56e48e6806c" exitCode=0 Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.049093 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4dcd0990-beb1-445a-b387-b2b78c1a39d2","Type":"ContainerDied","Data":"2c6fa5d16085f47a1816e6e7356d1268ade8fe801f24fc04ea91e56e48e6806c"} Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.051866 5008 generic.go:334] "Generic (PLEG): container finished" podID="907129fe-50cb-47ef-bbf6-db42cd2ad1ae" containerID="e93e17f1bada8f9ceb5d734c0b57f087df79c0ad461fa0d4048a7875532ded1d" exitCode=0 Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.051902 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d79ml" event={"ID":"907129fe-50cb-47ef-bbf6-db42cd2ad1ae","Type":"ContainerDied","Data":"e93e17f1bada8f9ceb5d734c0b57f087df79c0ad461fa0d4048a7875532ded1d"} Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.051925 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d79ml" event={"ID":"907129fe-50cb-47ef-bbf6-db42cd2ad1ae","Type":"ContainerStarted","Data":"ff2c646e70d92dcf4358d827ab57d652f752745f3e7a9b83004df897a827b555"} Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.199831 5008 scope.go:117] "RemoveContainer" containerID="074d5cb2df57c15195252921a34c3156f30decbbef34cf2601f7fc1b8f4751b1" Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.234269 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7pwkf"] Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.240489 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7pwkf"] Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.700373 5008 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/keystone-db-create-pggzk"] Jan 29 15:46:46 crc kubenswrapper[5008]: E0129 15:46:46.701996 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d528ee94-b499-4f20-8603-6dcc9e8b0361" containerName="dnsmasq-dns" Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.702101 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d528ee94-b499-4f20-8603-6dcc9e8b0361" containerName="dnsmasq-dns" Jan 29 15:46:46 crc kubenswrapper[5008]: E0129 15:46:46.702190 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d528ee94-b499-4f20-8603-6dcc9e8b0361" containerName="init" Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.702254 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d528ee94-b499-4f20-8603-6dcc9e8b0361" containerName="init" Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.702529 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="d528ee94-b499-4f20-8603-6dcc9e8b0361" containerName="dnsmasq-dns" Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.703234 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-pggzk" Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.707357 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-pggzk"] Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.811343 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-e4e6-account-create-update-6vxmr"] Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.815608 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e4e6-account-create-update-6vxmr" Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.822434 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.852040 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/232739d0-09f9-4843-8c9f-fc19bc53763f-operator-scripts\") pod \"keystone-db-create-pggzk\" (UID: \"232739d0-09f9-4843-8c9f-fc19bc53763f\") " pod="openstack/keystone-db-create-pggzk" Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.852642 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzbff\" (UniqueName: \"kubernetes.io/projected/232739d0-09f9-4843-8c9f-fc19bc53763f-kube-api-access-lzbff\") pod \"keystone-db-create-pggzk\" (UID: \"232739d0-09f9-4843-8c9f-fc19bc53763f\") " pod="openstack/keystone-db-create-pggzk" Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.866102 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e4e6-account-create-update-6vxmr"] Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.954388 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzbff\" (UniqueName: \"kubernetes.io/projected/232739d0-09f9-4843-8c9f-fc19bc53763f-kube-api-access-lzbff\") pod \"keystone-db-create-pggzk\" (UID: \"232739d0-09f9-4843-8c9f-fc19bc53763f\") " pod="openstack/keystone-db-create-pggzk" Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.954527 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/232739d0-09f9-4843-8c9f-fc19bc53763f-operator-scripts\") pod 
\"keystone-db-create-pggzk\" (UID: \"232739d0-09f9-4843-8c9f-fc19bc53763f\") " pod="openstack/keystone-db-create-pggzk" Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.954563 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30bc21a6-d1eb-4200-add0-523a33ffb2ff-operator-scripts\") pod \"keystone-e4e6-account-create-update-6vxmr\" (UID: \"30bc21a6-d1eb-4200-add0-523a33ffb2ff\") " pod="openstack/keystone-e4e6-account-create-update-6vxmr" Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.954588 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpwj8\" (UniqueName: \"kubernetes.io/projected/30bc21a6-d1eb-4200-add0-523a33ffb2ff-kube-api-access-gpwj8\") pod \"keystone-e4e6-account-create-update-6vxmr\" (UID: \"30bc21a6-d1eb-4200-add0-523a33ffb2ff\") " pod="openstack/keystone-e4e6-account-create-update-6vxmr" Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.955322 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/232739d0-09f9-4843-8c9f-fc19bc53763f-operator-scripts\") pod \"keystone-db-create-pggzk\" (UID: \"232739d0-09f9-4843-8c9f-fc19bc53763f\") " pod="openstack/keystone-db-create-pggzk" Jan 29 15:46:46 crc kubenswrapper[5008]: I0129 15:46:46.973909 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzbff\" (UniqueName: \"kubernetes.io/projected/232739d0-09f9-4843-8c9f-fc19bc53763f-kube-api-access-lzbff\") pod \"keystone-db-create-pggzk\" (UID: \"232739d0-09f9-4843-8c9f-fc19bc53763f\") " pod="openstack/keystone-db-create-pggzk" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.004880 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-8tpqs"] Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.006116 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8tpqs" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.018333 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-8tpqs"] Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.027472 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-pggzk" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.055811 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30bc21a6-d1eb-4200-add0-523a33ffb2ff-operator-scripts\") pod \"keystone-e4e6-account-create-update-6vxmr\" (UID: \"30bc21a6-d1eb-4200-add0-523a33ffb2ff\") " pod="openstack/keystone-e4e6-account-create-update-6vxmr" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.056193 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpwj8\" (UniqueName: \"kubernetes.io/projected/30bc21a6-d1eb-4200-add0-523a33ffb2ff-kube-api-access-gpwj8\") pod \"keystone-e4e6-account-create-update-6vxmr\" (UID: \"30bc21a6-d1eb-4200-add0-523a33ffb2ff\") " pod="openstack/keystone-e4e6-account-create-update-6vxmr" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.056765 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30bc21a6-d1eb-4200-add0-523a33ffb2ff-operator-scripts\") pod \"keystone-e4e6-account-create-update-6vxmr\" (UID: \"30bc21a6-d1eb-4200-add0-523a33ffb2ff\") " pod="openstack/keystone-e4e6-account-create-update-6vxmr" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.072307 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8c8683a3-18f6-4242-9991-b542aed9143b","Type":"ContainerStarted","Data":"17a9d85c4e86267ed17f122162314c4abf33109c5d7f30dc6ebf14f80d93172f"} Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.072563 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.077181 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4dcd0990-beb1-445a-b387-b2b78c1a39d2","Type":"ContainerStarted","Data":"9b31c687c333d16fa1b4aaf245a078f04a0f3ed0c06a452ddad2c14ecb517683"} Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.077720 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.078291 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpwj8\" (UniqueName: \"kubernetes.io/projected/30bc21a6-d1eb-4200-add0-523a33ffb2ff-kube-api-access-gpwj8\") pod \"keystone-e4e6-account-create-update-6vxmr\" (UID: \"30bc21a6-d1eb-4200-add0-523a33ffb2ff\") " pod="openstack/keystone-e4e6-account-create-update-6vxmr" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.116962 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=42.241720357 podStartE2EDuration="55.116941979s" podCreationTimestamp="2026-01-29 15:45:52 +0000 UTC" firstStartedPulling="2026-01-29 15:45:59.397241888 +0000 UTC m=+1103.070096135" lastFinishedPulling="2026-01-29 15:46:12.27246352 +0000 UTC m=+1115.945317757" observedRunningTime="2026-01-29 15:46:47.103527545 +0000 UTC m=+1150.776381782" watchObservedRunningTime="2026-01-29 15:46:47.116941979 +0000 UTC m=+1150.789796216" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.118829 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-4a04-account-create-update-2cfml"] Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.119863 
5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4a04-account-create-update-2cfml" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.123174 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.140948 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4a04-account-create-update-2cfml"] Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.143268 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=41.385315062 podStartE2EDuration="55.143247948s" podCreationTimestamp="2026-01-29 15:45:52 +0000 UTC" firstStartedPulling="2026-01-29 15:45:58.582046912 +0000 UTC m=+1102.254901149" lastFinishedPulling="2026-01-29 15:46:12.339979798 +0000 UTC m=+1116.012834035" observedRunningTime="2026-01-29 15:46:47.132764394 +0000 UTC m=+1150.805618641" watchObservedRunningTime="2026-01-29 15:46:47.143247948 +0000 UTC m=+1150.816102185" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.157755 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08da0630-8fe2-4a33-be0c-d81bba67c32c-operator-scripts\") pod \"placement-db-create-8tpqs\" (UID: \"08da0630-8fe2-4a33-be0c-d81bba67c32c\") " pod="openstack/placement-db-create-8tpqs" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.158128 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chrb2\" (UniqueName: \"kubernetes.io/projected/08da0630-8fe2-4a33-be0c-d81bba67c32c-kube-api-access-chrb2\") pod \"placement-db-create-8tpqs\" (UID: \"08da0630-8fe2-4a33-be0c-d81bba67c32c\") " pod="openstack/placement-db-create-8tpqs" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.159447 5008 util.go:30] "No sandbox for pod can be found. 
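
The pod_startup_latency_tracker entries record two durations per pod: podStartE2EDuration, the wall-clock time from podCreationTimestamp to the pod being observed running, and podStartSLOduration, which (consistent with every entry here) is the same interval minus the image-pull window. Worked through for rabbitmq-server-0 above:

    podStartE2EDuration = 15:46:47.117 - 15:45:52      ≈ 55.117s
    image pull window   = 15:46:12.272 - 15:45:59.397  ≈ 12.875s
    podStartSLOduration ≈ 55.117s - 12.875s            ≈ 42.242s   (logged: 42.241720357)

Pods that never pulled an image report the zero timestamp "0001-01-01 00:00:00" for both pull fields and an SLO duration equal to the E2E duration, as with the db-create and account-create-update jobs further below.
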
Need to start a new one" pod="openstack/keystone-e4e6-account-create-update-6vxmr" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.261558 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chrb2\" (UniqueName: \"kubernetes.io/projected/08da0630-8fe2-4a33-be0c-d81bba67c32c-kube-api-access-chrb2\") pod \"placement-db-create-8tpqs\" (UID: \"08da0630-8fe2-4a33-be0c-d81bba67c32c\") " pod="openstack/placement-db-create-8tpqs" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.261922 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfz4q\" (UniqueName: \"kubernetes.io/projected/6fd141cd-e623-4692-892c-cf683275d378-kube-api-access-kfz4q\") pod \"placement-4a04-account-create-update-2cfml\" (UID: \"6fd141cd-e623-4692-892c-cf683275d378\") " pod="openstack/placement-4a04-account-create-update-2cfml" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.262029 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd141cd-e623-4692-892c-cf683275d378-operator-scripts\") pod \"placement-4a04-account-create-update-2cfml\" (UID: \"6fd141cd-e623-4692-892c-cf683275d378\") " pod="openstack/placement-4a04-account-create-update-2cfml" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.262086 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08da0630-8fe2-4a33-be0c-d81bba67c32c-operator-scripts\") pod \"placement-db-create-8tpqs\" (UID: \"08da0630-8fe2-4a33-be0c-d81bba67c32c\") " pod="openstack/placement-db-create-8tpqs" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.262933 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08da0630-8fe2-4a33-be0c-d81bba67c32c-operator-scripts\") pod \"placement-db-create-8tpqs\" (UID: \"08da0630-8fe2-4a33-be0c-d81bba67c32c\") " pod="openstack/placement-db-create-8tpqs" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.280562 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chrb2\" (UniqueName: \"kubernetes.io/projected/08da0630-8fe2-4a33-be0c-d81bba67c32c-kube-api-access-chrb2\") pod \"placement-db-create-8tpqs\" (UID: \"08da0630-8fe2-4a33-be0c-d81bba67c32c\") " pod="openstack/placement-db-create-8tpqs" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.310642 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-rvpz6"] Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.311720 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rvpz6" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.339762 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d528ee94-b499-4f20-8603-6dcc9e8b0361" path="/var/lib/kubelet/pods/d528ee94-b499-4f20-8603-6dcc9e8b0361/volumes" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.340713 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-rvpz6"] Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.340898 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-8tpqs" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.348210 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-0e02-account-create-update-7n7jw"] Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.349229 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0e02-account-create-update-7n7jw" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.351094 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.356674 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0e02-account-create-update-7n7jw"] Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.362726 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd141cd-e623-4692-892c-cf683275d378-operator-scripts\") pod \"placement-4a04-account-create-update-2cfml\" (UID: \"6fd141cd-e623-4692-892c-cf683275d378\") " pod="openstack/placement-4a04-account-create-update-2cfml" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.362811 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfz4q\" (UniqueName: \"kubernetes.io/projected/6fd141cd-e623-4692-892c-cf683275d378-kube-api-access-kfz4q\") pod \"placement-4a04-account-create-update-2cfml\" (UID: \"6fd141cd-e623-4692-892c-cf683275d378\") " pod="openstack/placement-4a04-account-create-update-2cfml" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.365382 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd141cd-e623-4692-892c-cf683275d378-operator-scripts\") pod \"placement-4a04-account-create-update-2cfml\" (UID: \"6fd141cd-e623-4692-892c-cf683275d378\") " pod="openstack/placement-4a04-account-create-update-2cfml" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.393532 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfz4q\" (UniqueName: \"kubernetes.io/projected/6fd141cd-e623-4692-892c-cf683275d378-kube-api-access-kfz4q\") pod \"placement-4a04-account-create-update-2cfml\" (UID: \"6fd141cd-e623-4692-892c-cf683275d378\") " pod="openstack/placement-4a04-account-create-update-2cfml" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.441361 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-4a04-account-create-update-2cfml" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.469030 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/328d3758-78bd-4a08-b91f-f2f4c9b8b645-operator-scripts\") pod \"glance-0e02-account-create-update-7n7jw\" (UID: \"328d3758-78bd-4a08-b91f-f2f4c9b8b645\") " pod="openstack/glance-0e02-account-create-update-7n7jw" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.469094 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42nvh\" (UniqueName: \"kubernetes.io/projected/207579aa-feff-4069-8fcb-02c5b9cd107f-kube-api-access-42nvh\") pod \"glance-db-create-rvpz6\" (UID: \"207579aa-feff-4069-8fcb-02c5b9cd107f\") " pod="openstack/glance-db-create-rvpz6" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.469117 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/207579aa-feff-4069-8fcb-02c5b9cd107f-operator-scripts\") pod \"glance-db-create-rvpz6\" (UID: \"207579aa-feff-4069-8fcb-02c5b9cd107f\") " pod="openstack/glance-db-create-rvpz6" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.469146 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwbdf\" (UniqueName: \"kubernetes.io/projected/328d3758-78bd-4a08-b91f-f2f4c9b8b645-kube-api-access-kwbdf\") pod \"glance-0e02-account-create-update-7n7jw\" (UID: \"328d3758-78bd-4a08-b91f-f2f4c9b8b645\") " pod="openstack/glance-0e02-account-create-update-7n7jw" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.537296 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-pggzk"] Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.570823 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/328d3758-78bd-4a08-b91f-f2f4c9b8b645-operator-scripts\") pod \"glance-0e02-account-create-update-7n7jw\" (UID: \"328d3758-78bd-4a08-b91f-f2f4c9b8b645\") " pod="openstack/glance-0e02-account-create-update-7n7jw" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.570879 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42nvh\" (UniqueName: \"kubernetes.io/projected/207579aa-feff-4069-8fcb-02c5b9cd107f-kube-api-access-42nvh\") pod \"glance-db-create-rvpz6\" (UID: \"207579aa-feff-4069-8fcb-02c5b9cd107f\") " pod="openstack/glance-db-create-rvpz6" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.570911 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/207579aa-feff-4069-8fcb-02c5b9cd107f-operator-scripts\") pod \"glance-db-create-rvpz6\" (UID: \"207579aa-feff-4069-8fcb-02c5b9cd107f\") " pod="openstack/glance-db-create-rvpz6" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.570930 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwbdf\" (UniqueName: \"kubernetes.io/projected/328d3758-78bd-4a08-b91f-f2f4c9b8b645-kube-api-access-kwbdf\") pod \"glance-0e02-account-create-update-7n7jw\" (UID: \"328d3758-78bd-4a08-b91f-f2f4c9b8b645\") " pod="openstack/glance-0e02-account-create-update-7n7jw" Jan 29 
15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.571663 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/328d3758-78bd-4a08-b91f-f2f4c9b8b645-operator-scripts\") pod \"glance-0e02-account-create-update-7n7jw\" (UID: \"328d3758-78bd-4a08-b91f-f2f4c9b8b645\") " pod="openstack/glance-0e02-account-create-update-7n7jw" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.571901 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/207579aa-feff-4069-8fcb-02c5b9cd107f-operator-scripts\") pod \"glance-db-create-rvpz6\" (UID: \"207579aa-feff-4069-8fcb-02c5b9cd107f\") " pod="openstack/glance-db-create-rvpz6" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.589550 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42nvh\" (UniqueName: \"kubernetes.io/projected/207579aa-feff-4069-8fcb-02c5b9cd107f-kube-api-access-42nvh\") pod \"glance-db-create-rvpz6\" (UID: \"207579aa-feff-4069-8fcb-02c5b9cd107f\") " pod="openstack/glance-db-create-rvpz6" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.598522 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwbdf\" (UniqueName: \"kubernetes.io/projected/328d3758-78bd-4a08-b91f-f2f4c9b8b645-kube-api-access-kwbdf\") pod \"glance-0e02-account-create-update-7n7jw\" (UID: \"328d3758-78bd-4a08-b91f-f2f4c9b8b645\") " pod="openstack/glance-0e02-account-create-update-7n7jw" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.629755 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rvpz6" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.663136 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0e02-account-create-update-7n7jw" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.764960 5008 util.go:48] "No ready sandbox for pod can be found. 
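
Alongside the declared operator-scripts ConfigMap volumes, each of these job pods carries a generated kube-api-access-* volume (tnk2c, lzbff, gpwj8, chrb2, kfz4q, 42nvh, kwbdf above), which is why those mounts go through the kubernetes.io/projected plugin. Such volumes are injected automatically for the pod's service account rather than written into the manifest; their usual shape, sketched here since the actual sources are not in the log, is:

    volumes:
      - name: kube-api-access-lzbff
        projected:
          sources:
            - serviceAccountToken:
                expirationSeconds: 3607   # illustrative: the usual bound-token TTL
                path: token
            - configMap:
                name: kube-root-ca.crt
                items:
                  - key: ca.crt
                    path: ca.crt
            - downwardAPI:
                items:
                  - fieldRef:
                      fieldPath: metadata.namespace
                    path: namespace
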
Need to start a new one" pod="openstack/root-account-create-update-d79ml" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.765337 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e4e6-account-create-update-6vxmr"] Jan 29 15:46:47 crc kubenswrapper[5008]: W0129 15:46:47.786583 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30bc21a6_d1eb_4200_add0_523a33ffb2ff.slice/crio-741b8610835b687ce7228b8db800b0dc8110ac47c80d2fbce50d6d4778f9b8c9 WatchSource:0}: Error finding container 741b8610835b687ce7228b8db800b0dc8110ac47c80d2fbce50d6d4778f9b8c9: Status 404 returned error can't find the container with id 741b8610835b687ce7228b8db800b0dc8110ac47c80d2fbce50d6d4778f9b8c9 Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.815355 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4a04-account-create-update-2cfml"] Jan 29 15:46:47 crc kubenswrapper[5008]: W0129 15:46:47.828540 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fd141cd_e623_4692_892c_cf683275d378.slice/crio-b1a1e0db87964e86d48d6437df60d02406d7d66a45aba8031eab4f31b63623ff WatchSource:0}: Error finding container b1a1e0db87964e86d48d6437df60d02406d7d66a45aba8031eab4f31b63623ff: Status 404 returned error can't find the container with id b1a1e0db87964e86d48d6437df60d02406d7d66a45aba8031eab4f31b63623ff Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.859530 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-8tpqs"] Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.875403 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnk2c\" (UniqueName: \"kubernetes.io/projected/907129fe-50cb-47ef-bbf6-db42cd2ad1ae-kube-api-access-tnk2c\") pod \"907129fe-50cb-47ef-bbf6-db42cd2ad1ae\" (UID: \"907129fe-50cb-47ef-bbf6-db42cd2ad1ae\") " Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.875433 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/907129fe-50cb-47ef-bbf6-db42cd2ad1ae-operator-scripts\") pod \"907129fe-50cb-47ef-bbf6-db42cd2ad1ae\" (UID: \"907129fe-50cb-47ef-bbf6-db42cd2ad1ae\") " Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.876320 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/907129fe-50cb-47ef-bbf6-db42cd2ad1ae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "907129fe-50cb-47ef-bbf6-db42cd2ad1ae" (UID: "907129fe-50cb-47ef-bbf6-db42cd2ad1ae"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.882466 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/907129fe-50cb-47ef-bbf6-db42cd2ad1ae-kube-api-access-tnk2c" (OuterVolumeSpecName: "kube-api-access-tnk2c") pod "907129fe-50cb-47ef-bbf6-db42cd2ad1ae" (UID: "907129fe-50cb-47ef-bbf6-db42cd2ad1ae"). InnerVolumeSpecName "kube-api-access-tnk2c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:46:47 crc kubenswrapper[5008]: W0129 15:46:47.911274 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08da0630_8fe2_4a33_be0c_d81bba67c32c.slice/crio-6d0ad65014ebb39957c6339e270caadb75ebfe28c89252da30f9c9d630624877 WatchSource:0}: Error finding container 6d0ad65014ebb39957c6339e270caadb75ebfe28c89252da30f9c9d630624877: Status 404 returned error can't find the container with id 6d0ad65014ebb39957c6339e270caadb75ebfe28c89252da30f9c9d630624877 Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.976977 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnk2c\" (UniqueName: \"kubernetes.io/projected/907129fe-50cb-47ef-bbf6-db42cd2ad1ae-kube-api-access-tnk2c\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:47 crc kubenswrapper[5008]: I0129 15:46:47.977008 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/907129fe-50cb-47ef-bbf6-db42cd2ad1ae-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.085145 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4a04-account-create-update-2cfml" event={"ID":"6fd141cd-e623-4692-892c-cf683275d378","Type":"ContainerStarted","Data":"08622f8ad03658b22a0476180ef40d122a3ce215734ba57beccde8e385c5d87a"} Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.085202 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4a04-account-create-update-2cfml" event={"ID":"6fd141cd-e623-4692-892c-cf683275d378","Type":"ContainerStarted","Data":"b1a1e0db87964e86d48d6437df60d02406d7d66a45aba8031eab4f31b63623ff"} Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.086510 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d79ml" event={"ID":"907129fe-50cb-47ef-bbf6-db42cd2ad1ae","Type":"ContainerDied","Data":"ff2c646e70d92dcf4358d827ab57d652f752745f3e7a9b83004df897a827b555"} Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.086544 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff2c646e70d92dcf4358d827ab57d652f752745f3e7a9b83004df897a827b555" Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.086693 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-d79ml" Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.087876 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8tpqs" event={"ID":"08da0630-8fe2-4a33-be0c-d81bba67c32c","Type":"ContainerStarted","Data":"c12146b73a51a5482b71661513ea3874dfe91fc50f839323c14bf1dbe55d4888"} Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.087911 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8tpqs" event={"ID":"08da0630-8fe2-4a33-be0c-d81bba67c32c","Type":"ContainerStarted","Data":"6d0ad65014ebb39957c6339e270caadb75ebfe28c89252da30f9c9d630624877"} Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.089688 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e4e6-account-create-update-6vxmr" event={"ID":"30bc21a6-d1eb-4200-add0-523a33ffb2ff","Type":"ContainerStarted","Data":"9c021d2423056bd1e8f0c03523a2b976398e77dc14de7fa3b22ff99a7e7bf44a"} Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.089722 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e4e6-account-create-update-6vxmr" event={"ID":"30bc21a6-d1eb-4200-add0-523a33ffb2ff","Type":"ContainerStarted","Data":"741b8610835b687ce7228b8db800b0dc8110ac47c80d2fbce50d6d4778f9b8c9"} Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.091325 5008 generic.go:334] "Generic (PLEG): container finished" podID="5b273a50-b2db-40d5-b4b4-6494206c606d" containerID="bda0b4b24ad7358124acc7096a07129f2529fe34f4356b7cc8add641046f3880" exitCode=0 Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.091407 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-phmts" event={"ID":"5b273a50-b2db-40d5-b4b4-6494206c606d","Type":"ContainerDied","Data":"bda0b4b24ad7358124acc7096a07129f2529fe34f4356b7cc8add641046f3880"} Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.093726 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pggzk" event={"ID":"232739d0-09f9-4843-8c9f-fc19bc53763f","Type":"ContainerStarted","Data":"a31808be1fa3bc4b89dfda7f79836da13bf6f5c2671c33471c5061bfc1edc1ea"} Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.093763 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pggzk" event={"ID":"232739d0-09f9-4843-8c9f-fc19bc53763f","Type":"ContainerStarted","Data":"b4483fe57166afcb40a3f3934546faf4535fed5a2e09681d32d851d4837ee7f9"} Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.104071 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-4a04-account-create-update-2cfml" podStartSLOduration=1.1040534850000001 podStartE2EDuration="1.104053485s" podCreationTimestamp="2026-01-29 15:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:46:48.102044967 +0000 UTC m=+1151.774899204" watchObservedRunningTime="2026-01-29 15:46:48.104053485 +0000 UTC m=+1151.776907722" Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.149839 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-8tpqs" podStartSLOduration=2.149811185 podStartE2EDuration="2.149811185s" podCreationTimestamp="2026-01-29 15:46:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-29 15:46:48.129843341 +0000 UTC m=+1151.802697578" watchObservedRunningTime="2026-01-29 15:46:48.149811185 +0000 UTC m=+1151.822665442" Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.172999 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-e4e6-account-create-update-6vxmr" podStartSLOduration=2.172979858 podStartE2EDuration="2.172979858s" podCreationTimestamp="2026-01-29 15:46:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:46:48.162659637 +0000 UTC m=+1151.835513874" watchObservedRunningTime="2026-01-29 15:46:48.172979858 +0000 UTC m=+1151.845834095" Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.221831 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-rvpz6"] Jan 29 15:46:48 crc kubenswrapper[5008]: W0129 15:46:48.251231 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod328d3758_78bd_4a08_b91f_f2f4c9b8b645.slice/crio-34754dadc6ea4db924da4974c7057a8848e9b28291233241bba0a76c9206a683 WatchSource:0}: Error finding container 34754dadc6ea4db924da4974c7057a8848e9b28291233241bba0a76c9206a683: Status 404 returned error can't find the container with id 34754dadc6ea4db924da4974c7057a8848e9b28291233241bba0a76c9206a683 Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.272607 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-pggzk" podStartSLOduration=2.272589534 podStartE2EDuration="2.272589534s" podCreationTimestamp="2026-01-29 15:46:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:46:48.214232198 +0000 UTC m=+1151.887086435" watchObservedRunningTime="2026-01-29 15:46:48.272589534 +0000 UTC m=+1151.945443771" Jan 29 15:46:48 crc kubenswrapper[5008]: I0129 15:46:48.323629 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0e02-account-create-update-7n7jw"] Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.106227 5008 generic.go:334] "Generic (PLEG): container finished" podID="30bc21a6-d1eb-4200-add0-523a33ffb2ff" containerID="9c021d2423056bd1e8f0c03523a2b976398e77dc14de7fa3b22ff99a7e7bf44a" exitCode=0 Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.106600 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e4e6-account-create-update-6vxmr" event={"ID":"30bc21a6-d1eb-4200-add0-523a33ffb2ff","Type":"ContainerDied","Data":"9c021d2423056bd1e8f0c03523a2b976398e77dc14de7fa3b22ff99a7e7bf44a"} Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.110541 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rvpz6" event={"ID":"207579aa-feff-4069-8fcb-02c5b9cd107f","Type":"ContainerStarted","Data":"d9b41e67155f529dbd273cfba785076257b2721a371f6a0e62d1c4355eb9512a"} Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.110576 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rvpz6" event={"ID":"207579aa-feff-4069-8fcb-02c5b9cd107f","Type":"ContainerStarted","Data":"9fe8adf0f447ec158390678253e2d815451e2613c777de440bc0dbb02a7556a8"} Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.114511 5008 generic.go:334] "Generic (PLEG): container finished" podID="6fd141cd-e623-4692-892c-cf683275d378" 
containerID="08622f8ad03658b22a0476180ef40d122a3ce215734ba57beccde8e385c5d87a" exitCode=0 Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.114681 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4a04-account-create-update-2cfml" event={"ID":"6fd141cd-e623-4692-892c-cf683275d378","Type":"ContainerDied","Data":"08622f8ad03658b22a0476180ef40d122a3ce215734ba57beccde8e385c5d87a"} Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.116688 5008 generic.go:334] "Generic (PLEG): container finished" podID="232739d0-09f9-4843-8c9f-fc19bc53763f" containerID="a31808be1fa3bc4b89dfda7f79836da13bf6f5c2671c33471c5061bfc1edc1ea" exitCode=0 Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.116737 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pggzk" event={"ID":"232739d0-09f9-4843-8c9f-fc19bc53763f","Type":"ContainerDied","Data":"a31808be1fa3bc4b89dfda7f79836da13bf6f5c2671c33471c5061bfc1edc1ea"} Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.118735 5008 generic.go:334] "Generic (PLEG): container finished" podID="08da0630-8fe2-4a33-be0c-d81bba67c32c" containerID="c12146b73a51a5482b71661513ea3874dfe91fc50f839323c14bf1dbe55d4888" exitCode=0 Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.118837 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8tpqs" event={"ID":"08da0630-8fe2-4a33-be0c-d81bba67c32c","Type":"ContainerDied","Data":"c12146b73a51a5482b71661513ea3874dfe91fc50f839323c14bf1dbe55d4888"} Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.126860 5008 generic.go:334] "Generic (PLEG): container finished" podID="328d3758-78bd-4a08-b91f-f2f4c9b8b645" containerID="d694dd74760c7fb5bcb25c24900b008d41d6e4127c92f70bb60fd3e6fc52c215" exitCode=0 Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.126933 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0e02-account-create-update-7n7jw" event={"ID":"328d3758-78bd-4a08-b91f-f2f4c9b8b645","Type":"ContainerDied","Data":"d694dd74760c7fb5bcb25c24900b008d41d6e4127c92f70bb60fd3e6fc52c215"} Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.126966 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0e02-account-create-update-7n7jw" event={"ID":"328d3758-78bd-4a08-b91f-f2f4c9b8b645","Type":"ContainerStarted","Data":"34754dadc6ea4db924da4974c7057a8848e9b28291233241bba0a76c9206a683"} Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.196096 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-rvpz6" podStartSLOduration=2.196079766 podStartE2EDuration="2.196079766s" podCreationTimestamp="2026-01-29 15:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:46:49.191515565 +0000 UTC m=+1152.864369792" watchObservedRunningTime="2026-01-29 15:46:49.196079766 +0000 UTC m=+1152.868934013" Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.461894 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.604839 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5b273a50-b2db-40d5-b4b4-6494206c606d-ring-data-devices\") pod \"5b273a50-b2db-40d5-b4b4-6494206c606d\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.605123 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-dispersionconf\") pod \"5b273a50-b2db-40d5-b4b4-6494206c606d\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.605172 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-combined-ca-bundle\") pod \"5b273a50-b2db-40d5-b4b4-6494206c606d\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.605205 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5b273a50-b2db-40d5-b4b4-6494206c606d-etc-swift\") pod \"5b273a50-b2db-40d5-b4b4-6494206c606d\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.605270 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxr66\" (UniqueName: \"kubernetes.io/projected/5b273a50-b2db-40d5-b4b4-6494206c606d-kube-api-access-gxr66\") pod \"5b273a50-b2db-40d5-b4b4-6494206c606d\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.605328 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b273a50-b2db-40d5-b4b4-6494206c606d-scripts\") pod \"5b273a50-b2db-40d5-b4b4-6494206c606d\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.605380 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-swiftconf\") pod \"5b273a50-b2db-40d5-b4b4-6494206c606d\" (UID: \"5b273a50-b2db-40d5-b4b4-6494206c606d\") " Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.605615 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b273a50-b2db-40d5-b4b4-6494206c606d-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "5b273a50-b2db-40d5-b4b4-6494206c606d" (UID: "5b273a50-b2db-40d5-b4b4-6494206c606d"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.605934 5008 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5b273a50-b2db-40d5-b4b4-6494206c606d-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.606880 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b273a50-b2db-40d5-b4b4-6494206c606d-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "5b273a50-b2db-40d5-b4b4-6494206c606d" (UID: "5b273a50-b2db-40d5-b4b4-6494206c606d"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.615714 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "5b273a50-b2db-40d5-b4b4-6494206c606d" (UID: "5b273a50-b2db-40d5-b4b4-6494206c606d"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.617056 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.623025 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b273a50-b2db-40d5-b4b4-6494206c606d-kube-api-access-gxr66" (OuterVolumeSpecName: "kube-api-access-gxr66") pod "5b273a50-b2db-40d5-b4b4-6494206c606d" (UID: "5b273a50-b2db-40d5-b4b4-6494206c606d"). InnerVolumeSpecName "kube-api-access-gxr66". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.630956 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b273a50-b2db-40d5-b4b4-6494206c606d-scripts" (OuterVolumeSpecName: "scripts") pod "5b273a50-b2db-40d5-b4b4-6494206c606d" (UID: "5b273a50-b2db-40d5-b4b4-6494206c606d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.634182 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b273a50-b2db-40d5-b4b4-6494206c606d" (UID: "5b273a50-b2db-40d5-b4b4-6494206c606d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.636911 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "5b273a50-b2db-40d5-b4b4-6494206c606d" (UID: "5b273a50-b2db-40d5-b4b4-6494206c606d"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.718374 5008 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.718419 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.718432 5008 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5b273a50-b2db-40d5-b4b4-6494206c606d-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.718445 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxr66\" (UniqueName: \"kubernetes.io/projected/5b273a50-b2db-40d5-b4b4-6494206c606d-kube-api-access-gxr66\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.718460 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b273a50-b2db-40d5-b4b4-6494206c606d-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:49 crc kubenswrapper[5008]: I0129 15:46:49.718472 5008 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5b273a50-b2db-40d5-b4b4-6494206c606d-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.136938 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-phmts" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.136913 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-phmts" event={"ID":"5b273a50-b2db-40d5-b4b4-6494206c606d","Type":"ContainerDied","Data":"a2a98c18f51d01224109abefa4392158329836c967e1403808990bd7b1c85a20"} Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.137061 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2a98c18f51d01224109abefa4392158329836c967e1403808990bd7b1c85a20" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.139066 5008 generic.go:334] "Generic (PLEG): container finished" podID="207579aa-feff-4069-8fcb-02c5b9cd107f" containerID="d9b41e67155f529dbd273cfba785076257b2721a371f6a0e62d1c4355eb9512a" exitCode=0 Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.139128 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rvpz6" event={"ID":"207579aa-feff-4069-8fcb-02c5b9cd107f","Type":"ContainerDied","Data":"d9b41e67155f529dbd273cfba785076257b2721a371f6a0e62d1c4355eb9512a"} Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.458663 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-d79ml"] Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.463110 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-d79ml"] Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.571484 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e4e6-account-create-update-6vxmr" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.634435 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30bc21a6-d1eb-4200-add0-523a33ffb2ff-operator-scripts\") pod \"30bc21a6-d1eb-4200-add0-523a33ffb2ff\" (UID: \"30bc21a6-d1eb-4200-add0-523a33ffb2ff\") " Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.634502 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpwj8\" (UniqueName: \"kubernetes.io/projected/30bc21a6-d1eb-4200-add0-523a33ffb2ff-kube-api-access-gpwj8\") pod \"30bc21a6-d1eb-4200-add0-523a33ffb2ff\" (UID: \"30bc21a6-d1eb-4200-add0-523a33ffb2ff\") " Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.635828 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30bc21a6-d1eb-4200-add0-523a33ffb2ff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "30bc21a6-d1eb-4200-add0-523a33ffb2ff" (UID: "30bc21a6-d1eb-4200-add0-523a33ffb2ff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.649460 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30bc21a6-d1eb-4200-add0-523a33ffb2ff-kube-api-access-gpwj8" (OuterVolumeSpecName: "kube-api-access-gpwj8") pod "30bc21a6-d1eb-4200-add0-523a33ffb2ff" (UID: "30bc21a6-d1eb-4200-add0-523a33ffb2ff"). InnerVolumeSpecName "kube-api-access-gpwj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.736640 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30bc21a6-d1eb-4200-add0-523a33ffb2ff-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.736676 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpwj8\" (UniqueName: \"kubernetes.io/projected/30bc21a6-d1eb-4200-add0-523a33ffb2ff-kube-api-access-gpwj8\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.759456 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0e02-account-create-update-7n7jw" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.768141 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8tpqs" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.810867 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-pggzk" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.822459 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-4a04-account-create-update-2cfml" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.838559 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/328d3758-78bd-4a08-b91f-f2f4c9b8b645-operator-scripts\") pod \"328d3758-78bd-4a08-b91f-f2f4c9b8b645\" (UID: \"328d3758-78bd-4a08-b91f-f2f4c9b8b645\") " Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.838778 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwbdf\" (UniqueName: \"kubernetes.io/projected/328d3758-78bd-4a08-b91f-f2f4c9b8b645-kube-api-access-kwbdf\") pod \"328d3758-78bd-4a08-b91f-f2f4c9b8b645\" (UID: \"328d3758-78bd-4a08-b91f-f2f4c9b8b645\") " Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.838888 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chrb2\" (UniqueName: \"kubernetes.io/projected/08da0630-8fe2-4a33-be0c-d81bba67c32c-kube-api-access-chrb2\") pod \"08da0630-8fe2-4a33-be0c-d81bba67c32c\" (UID: \"08da0630-8fe2-4a33-be0c-d81bba67c32c\") " Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.838991 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08da0630-8fe2-4a33-be0c-d81bba67c32c-operator-scripts\") pod \"08da0630-8fe2-4a33-be0c-d81bba67c32c\" (UID: \"08da0630-8fe2-4a33-be0c-d81bba67c32c\") " Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.841563 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08da0630-8fe2-4a33-be0c-d81bba67c32c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "08da0630-8fe2-4a33-be0c-d81bba67c32c" (UID: "08da0630-8fe2-4a33-be0c-d81bba67c32c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.844976 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/328d3758-78bd-4a08-b91f-f2f4c9b8b645-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "328d3758-78bd-4a08-b91f-f2f4c9b8b645" (UID: "328d3758-78bd-4a08-b91f-f2f4c9b8b645"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.845585 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/328d3758-78bd-4a08-b91f-f2f4c9b8b645-kube-api-access-kwbdf" (OuterVolumeSpecName: "kube-api-access-kwbdf") pod "328d3758-78bd-4a08-b91f-f2f4c9b8b645" (UID: "328d3758-78bd-4a08-b91f-f2f4c9b8b645"). InnerVolumeSpecName "kube-api-access-kwbdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.857046 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08da0630-8fe2-4a33-be0c-d81bba67c32c-kube-api-access-chrb2" (OuterVolumeSpecName: "kube-api-access-chrb2") pod "08da0630-8fe2-4a33-be0c-d81bba67c32c" (UID: "08da0630-8fe2-4a33-be0c-d81bba67c32c"). InnerVolumeSpecName "kube-api-access-chrb2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.941838 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzbff\" (UniqueName: \"kubernetes.io/projected/232739d0-09f9-4843-8c9f-fc19bc53763f-kube-api-access-lzbff\") pod \"232739d0-09f9-4843-8c9f-fc19bc53763f\" (UID: \"232739d0-09f9-4843-8c9f-fc19bc53763f\") " Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.941948 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd141cd-e623-4692-892c-cf683275d378-operator-scripts\") pod \"6fd141cd-e623-4692-892c-cf683275d378\" (UID: \"6fd141cd-e623-4692-892c-cf683275d378\") " Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.942011 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/232739d0-09f9-4843-8c9f-fc19bc53763f-operator-scripts\") pod \"232739d0-09f9-4843-8c9f-fc19bc53763f\" (UID: \"232739d0-09f9-4843-8c9f-fc19bc53763f\") " Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.942056 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfz4q\" (UniqueName: \"kubernetes.io/projected/6fd141cd-e623-4692-892c-cf683275d378-kube-api-access-kfz4q\") pod \"6fd141cd-e623-4692-892c-cf683275d378\" (UID: \"6fd141cd-e623-4692-892c-cf683275d378\") " Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.942449 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/328d3758-78bd-4a08-b91f-f2f4c9b8b645-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.942477 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwbdf\" (UniqueName: \"kubernetes.io/projected/328d3758-78bd-4a08-b91f-f2f4c9b8b645-kube-api-access-kwbdf\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.942494 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chrb2\" (UniqueName: \"kubernetes.io/projected/08da0630-8fe2-4a33-be0c-d81bba67c32c-kube-api-access-chrb2\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.942508 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08da0630-8fe2-4a33-be0c-d81bba67c32c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.942734 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fd141cd-e623-4692-892c-cf683275d378-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6fd141cd-e623-4692-892c-cf683275d378" (UID: "6fd141cd-e623-4692-892c-cf683275d378"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.942748 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/232739d0-09f9-4843-8c9f-fc19bc53763f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "232739d0-09f9-4843-8c9f-fc19bc53763f" (UID: "232739d0-09f9-4843-8c9f-fc19bc53763f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.945898 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fd141cd-e623-4692-892c-cf683275d378-kube-api-access-kfz4q" (OuterVolumeSpecName: "kube-api-access-kfz4q") pod "6fd141cd-e623-4692-892c-cf683275d378" (UID: "6fd141cd-e623-4692-892c-cf683275d378"). InnerVolumeSpecName "kube-api-access-kfz4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:46:50 crc kubenswrapper[5008]: I0129 15:46:50.945979 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/232739d0-09f9-4843-8c9f-fc19bc53763f-kube-api-access-lzbff" (OuterVolumeSpecName: "kube-api-access-lzbff") pod "232739d0-09f9-4843-8c9f-fc19bc53763f" (UID: "232739d0-09f9-4843-8c9f-fc19bc53763f"). InnerVolumeSpecName "kube-api-access-lzbff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.044298 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzbff\" (UniqueName: \"kubernetes.io/projected/232739d0-09f9-4843-8c9f-fc19bc53763f-kube-api-access-lzbff\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.044334 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd141cd-e623-4692-892c-cf683275d378-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.044343 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/232739d0-09f9-4843-8c9f-fc19bc53763f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.044351 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfz4q\" (UniqueName: \"kubernetes.io/projected/6fd141cd-e623-4692-892c-cf683275d378-kube-api-access-kfz4q\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.147666 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4a04-account-create-update-2cfml" event={"ID":"6fd141cd-e623-4692-892c-cf683275d378","Type":"ContainerDied","Data":"b1a1e0db87964e86d48d6437df60d02406d7d66a45aba8031eab4f31b63623ff"} Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.148151 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1a1e0db87964e86d48d6437df60d02406d7d66a45aba8031eab4f31b63623ff" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.147702 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4a04-account-create-update-2cfml" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.160396 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pggzk" event={"ID":"232739d0-09f9-4843-8c9f-fc19bc53763f","Type":"ContainerDied","Data":"b4483fe57166afcb40a3f3934546faf4535fed5a2e09681d32d851d4837ee7f9"} Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.160457 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4483fe57166afcb40a3f3934546faf4535fed5a2e09681d32d851d4837ee7f9" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.160410 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-pggzk" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.163144 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8tpqs" event={"ID":"08da0630-8fe2-4a33-be0c-d81bba67c32c","Type":"ContainerDied","Data":"6d0ad65014ebb39957c6339e270caadb75ebfe28c89252da30f9c9d630624877"} Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.163195 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d0ad65014ebb39957c6339e270caadb75ebfe28c89252da30f9c9d630624877" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.163284 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8tpqs" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.165406 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0e02-account-create-update-7n7jw" event={"ID":"328d3758-78bd-4a08-b91f-f2f4c9b8b645","Type":"ContainerDied","Data":"34754dadc6ea4db924da4974c7057a8848e9b28291233241bba0a76c9206a683"} Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.165436 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34754dadc6ea4db924da4974c7057a8848e9b28291233241bba0a76c9206a683" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.165469 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0e02-account-create-update-7n7jw" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.168120 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e4e6-account-create-update-6vxmr" event={"ID":"30bc21a6-d1eb-4200-add0-523a33ffb2ff","Type":"ContainerDied","Data":"741b8610835b687ce7228b8db800b0dc8110ac47c80d2fbce50d6d4778f9b8c9"} Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.168163 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e4e6-account-create-update-6vxmr" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.168183 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="741b8610835b687ce7228b8db800b0dc8110ac47c80d2fbce50d6d4778f9b8c9" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.334517 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="907129fe-50cb-47ef-bbf6-db42cd2ad1ae" path="/var/lib/kubelet/pods/907129fe-50cb-47ef-bbf6-db42cd2ad1ae/volumes" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.521434 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-rvpz6" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.653197 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/207579aa-feff-4069-8fcb-02c5b9cd107f-operator-scripts\") pod \"207579aa-feff-4069-8fcb-02c5b9cd107f\" (UID: \"207579aa-feff-4069-8fcb-02c5b9cd107f\") " Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.653252 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42nvh\" (UniqueName: \"kubernetes.io/projected/207579aa-feff-4069-8fcb-02c5b9cd107f-kube-api-access-42nvh\") pod \"207579aa-feff-4069-8fcb-02c5b9cd107f\" (UID: \"207579aa-feff-4069-8fcb-02c5b9cd107f\") " Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.654072 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/207579aa-feff-4069-8fcb-02c5b9cd107f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "207579aa-feff-4069-8fcb-02c5b9cd107f" (UID: "207579aa-feff-4069-8fcb-02c5b9cd107f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.668265 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/207579aa-feff-4069-8fcb-02c5b9cd107f-kube-api-access-42nvh" (OuterVolumeSpecName: "kube-api-access-42nvh") pod "207579aa-feff-4069-8fcb-02c5b9cd107f" (UID: "207579aa-feff-4069-8fcb-02c5b9cd107f"). InnerVolumeSpecName "kube-api-access-42nvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.754760 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/207579aa-feff-4069-8fcb-02c5b9cd107f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:51 crc kubenswrapper[5008]: I0129 15:46:51.754812 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42nvh\" (UniqueName: \"kubernetes.io/projected/207579aa-feff-4069-8fcb-02c5b9cd107f-kube-api-access-42nvh\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:52 crc kubenswrapper[5008]: I0129 15:46:52.177168 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rvpz6" event={"ID":"207579aa-feff-4069-8fcb-02c5b9cd107f","Type":"ContainerDied","Data":"9fe8adf0f447ec158390678253e2d815451e2613c777de440bc0dbb02a7556a8"} Jan 29 15:46:52 crc kubenswrapper[5008]: I0129 15:46:52.177222 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fe8adf0f447ec158390678253e2d815451e2613c777de440bc0dbb02a7556a8" Jan 29 15:46:52 crc kubenswrapper[5008]: I0129 15:46:52.177222 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-rvpz6" Jan 29 15:46:53 crc kubenswrapper[5008]: I0129 15:46:53.357385 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-bw9wr" podUID="0dd702c8-269b-4fb6-a3a7-03adf93d916a" containerName="ovn-controller" probeResult="failure" output=< Jan 29 15:46:53 crc kubenswrapper[5008]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 29 15:46:53 crc kubenswrapper[5008]: > Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.475266 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-bxxx2"] Jan 29 15:46:55 crc kubenswrapper[5008]: E0129 15:46:55.476108 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b273a50-b2db-40d5-b4b4-6494206c606d" containerName="swift-ring-rebalance" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.476123 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b273a50-b2db-40d5-b4b4-6494206c606d" containerName="swift-ring-rebalance" Jan 29 15:46:55 crc kubenswrapper[5008]: E0129 15:46:55.476137 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328d3758-78bd-4a08-b91f-f2f4c9b8b645" containerName="mariadb-account-create-update" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.476144 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="328d3758-78bd-4a08-b91f-f2f4c9b8b645" containerName="mariadb-account-create-update" Jan 29 15:46:55 crc kubenswrapper[5008]: E0129 15:46:55.476174 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="232739d0-09f9-4843-8c9f-fc19bc53763f" containerName="mariadb-database-create" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.476183 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="232739d0-09f9-4843-8c9f-fc19bc53763f" containerName="mariadb-database-create" Jan 29 15:46:55 crc kubenswrapper[5008]: E0129 15:46:55.476192 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="207579aa-feff-4069-8fcb-02c5b9cd107f" containerName="mariadb-database-create" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.476199 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="207579aa-feff-4069-8fcb-02c5b9cd107f" containerName="mariadb-database-create" Jan 29 15:46:55 crc kubenswrapper[5008]: E0129 15:46:55.476211 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="907129fe-50cb-47ef-bbf6-db42cd2ad1ae" containerName="mariadb-account-create-update" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.476218 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="907129fe-50cb-47ef-bbf6-db42cd2ad1ae" containerName="mariadb-account-create-update" Jan 29 15:46:55 crc kubenswrapper[5008]: E0129 15:46:55.476230 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08da0630-8fe2-4a33-be0c-d81bba67c32c" containerName="mariadb-database-create" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.476236 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="08da0630-8fe2-4a33-be0c-d81bba67c32c" containerName="mariadb-database-create" Jan 29 15:46:55 crc kubenswrapper[5008]: E0129 15:46:55.476249 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fd141cd-e623-4692-892c-cf683275d378" containerName="mariadb-account-create-update" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.476256 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fd141cd-e623-4692-892c-cf683275d378" 
containerName="mariadb-account-create-update" Jan 29 15:46:55 crc kubenswrapper[5008]: E0129 15:46:55.476271 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30bc21a6-d1eb-4200-add0-523a33ffb2ff" containerName="mariadb-account-create-update" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.476278 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="30bc21a6-d1eb-4200-add0-523a33ffb2ff" containerName="mariadb-account-create-update" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.476445 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="30bc21a6-d1eb-4200-add0-523a33ffb2ff" containerName="mariadb-account-create-update" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.476460 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="232739d0-09f9-4843-8c9f-fc19bc53763f" containerName="mariadb-database-create" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.476473 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="08da0630-8fe2-4a33-be0c-d81bba67c32c" containerName="mariadb-database-create" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.476483 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="207579aa-feff-4069-8fcb-02c5b9cd107f" containerName="mariadb-database-create" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.476493 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="907129fe-50cb-47ef-bbf6-db42cd2ad1ae" containerName="mariadb-account-create-update" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.476505 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b273a50-b2db-40d5-b4b4-6494206c606d" containerName="swift-ring-rebalance" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.476513 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="328d3758-78bd-4a08-b91f-f2f4c9b8b645" containerName="mariadb-account-create-update" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.476525 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fd141cd-e623-4692-892c-cf683275d378" containerName="mariadb-account-create-update" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.477289 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-bxxx2" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.479258 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.495247 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bxxx2"] Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.617176 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5wjx\" (UniqueName: \"kubernetes.io/projected/98c93f6a-d803-4df3-8b35-191cbe683adf-kube-api-access-s5wjx\") pod \"root-account-create-update-bxxx2\" (UID: \"98c93f6a-d803-4df3-8b35-191cbe683adf\") " pod="openstack/root-account-create-update-bxxx2" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.617605 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98c93f6a-d803-4df3-8b35-191cbe683adf-operator-scripts\") pod \"root-account-create-update-bxxx2\" (UID: \"98c93f6a-d803-4df3-8b35-191cbe683adf\") " pod="openstack/root-account-create-update-bxxx2" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.719715 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5wjx\" (UniqueName: \"kubernetes.io/projected/98c93f6a-d803-4df3-8b35-191cbe683adf-kube-api-access-s5wjx\") pod \"root-account-create-update-bxxx2\" (UID: \"98c93f6a-d803-4df3-8b35-191cbe683adf\") " pod="openstack/root-account-create-update-bxxx2" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.719976 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98c93f6a-d803-4df3-8b35-191cbe683adf-operator-scripts\") pod \"root-account-create-update-bxxx2\" (UID: \"98c93f6a-d803-4df3-8b35-191cbe683adf\") " pod="openstack/root-account-create-update-bxxx2" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.720683 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98c93f6a-d803-4df3-8b35-191cbe683adf-operator-scripts\") pod \"root-account-create-update-bxxx2\" (UID: \"98c93f6a-d803-4df3-8b35-191cbe683adf\") " pod="openstack/root-account-create-update-bxxx2" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.740052 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5wjx\" (UniqueName: \"kubernetes.io/projected/98c93f6a-d803-4df3-8b35-191cbe683adf-kube-api-access-s5wjx\") pod \"root-account-create-update-bxxx2\" (UID: \"98c93f6a-d803-4df3-8b35-191cbe683adf\") " pod="openstack/root-account-create-update-bxxx2" Jan 29 15:46:55 crc kubenswrapper[5008]: I0129 15:46:55.816850 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-bxxx2" Jan 29 15:46:56 crc kubenswrapper[5008]: I0129 15:46:56.249902 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bxxx2"] Jan 29 15:46:56 crc kubenswrapper[5008]: W0129 15:46:56.255381 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98c93f6a_d803_4df3_8b35_191cbe683adf.slice/crio-b45ac6c0a52ac32bcd4c9908e0789f9ada50588c4ead8e40fd13649820fea074 WatchSource:0}: Error finding container b45ac6c0a52ac32bcd4c9908e0789f9ada50588c4ead8e40fd13649820fea074: Status 404 returned error can't find the container with id b45ac6c0a52ac32bcd4c9908e0789f9ada50588c4ead8e40fd13649820fea074 Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.217841 5008 generic.go:334] "Generic (PLEG): container finished" podID="98c93f6a-d803-4df3-8b35-191cbe683adf" containerID="88e4435b5bfd1a79780b926cd500b5d39ca87b3e8a648cc8d9d789e4cf17dfd1" exitCode=0 Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.217910 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bxxx2" event={"ID":"98c93f6a-d803-4df3-8b35-191cbe683adf","Type":"ContainerDied","Data":"88e4435b5bfd1a79780b926cd500b5d39ca87b3e8a648cc8d9d789e4cf17dfd1"} Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.218104 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bxxx2" event={"ID":"98c93f6a-d803-4df3-8b35-191cbe683adf","Type":"ContainerStarted","Data":"b45ac6c0a52ac32bcd4c9908e0789f9ada50588c4ead8e40fd13649820fea074"} Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.500311 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-n7wgw"] Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.501331 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-n7wgw" Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.503210 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-2qq6q" Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.503595 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.514444 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-n7wgw"] Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.652734 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m6lk\" (UniqueName: \"kubernetes.io/projected/8277eb2b-44f8-4fd9-af92-1832e0272e0e-kube-api-access-9m6lk\") pod \"glance-db-sync-n7wgw\" (UID: \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\") " pod="openstack/glance-db-sync-n7wgw" Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.652832 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-combined-ca-bundle\") pod \"glance-db-sync-n7wgw\" (UID: \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\") " pod="openstack/glance-db-sync-n7wgw" Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.652886 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-config-data\") pod \"glance-db-sync-n7wgw\" (UID: \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\") " pod="openstack/glance-db-sync-n7wgw" Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.652964 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-db-sync-config-data\") pod \"glance-db-sync-n7wgw\" (UID: \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\") " pod="openstack/glance-db-sync-n7wgw" Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.755348 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m6lk\" (UniqueName: \"kubernetes.io/projected/8277eb2b-44f8-4fd9-af92-1832e0272e0e-kube-api-access-9m6lk\") pod \"glance-db-sync-n7wgw\" (UID: \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\") " pod="openstack/glance-db-sync-n7wgw" Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.755401 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-combined-ca-bundle\") pod \"glance-db-sync-n7wgw\" (UID: \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\") " pod="openstack/glance-db-sync-n7wgw" Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.755437 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-config-data\") pod \"glance-db-sync-n7wgw\" (UID: \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\") " pod="openstack/glance-db-sync-n7wgw" Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.755469 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-db-sync-config-data\") pod 
\"glance-db-sync-n7wgw\" (UID: \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\") " pod="openstack/glance-db-sync-n7wgw" Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.762429 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-db-sync-config-data\") pod \"glance-db-sync-n7wgw\" (UID: \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\") " pod="openstack/glance-db-sync-n7wgw" Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.762438 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-combined-ca-bundle\") pod \"glance-db-sync-n7wgw\" (UID: \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\") " pod="openstack/glance-db-sync-n7wgw" Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.763647 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-config-data\") pod \"glance-db-sync-n7wgw\" (UID: \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\") " pod="openstack/glance-db-sync-n7wgw" Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.775166 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m6lk\" (UniqueName: \"kubernetes.io/projected/8277eb2b-44f8-4fd9-af92-1832e0272e0e-kube-api-access-9m6lk\") pod \"glance-db-sync-n7wgw\" (UID: \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\") " pod="openstack/glance-db-sync-n7wgw" Jan 29 15:46:57 crc kubenswrapper[5008]: I0129 15:46:57.830737 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-n7wgw" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.365085 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-bw9wr" podUID="0dd702c8-269b-4fb6-a3a7-03adf93d916a" containerName="ovn-controller" probeResult="failure" output=< Jan 29 15:46:58 crc kubenswrapper[5008]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 29 15:46:58 crc kubenswrapper[5008]: > Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.420995 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-n7wgw"] Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.459176 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.461848 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-k5zwb" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.551462 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-bxxx2" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.676707 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5wjx\" (UniqueName: \"kubernetes.io/projected/98c93f6a-d803-4df3-8b35-191cbe683adf-kube-api-access-s5wjx\") pod \"98c93f6a-d803-4df3-8b35-191cbe683adf\" (UID: \"98c93f6a-d803-4df3-8b35-191cbe683adf\") " Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.676825 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98c93f6a-d803-4df3-8b35-191cbe683adf-operator-scripts\") pod \"98c93f6a-d803-4df3-8b35-191cbe683adf\" (UID: \"98c93f6a-d803-4df3-8b35-191cbe683adf\") " Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.678195 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98c93f6a-d803-4df3-8b35-191cbe683adf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "98c93f6a-d803-4df3-8b35-191cbe683adf" (UID: "98c93f6a-d803-4df3-8b35-191cbe683adf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.679446 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-bw9wr-config-rv27j"] Jan 29 15:46:58 crc kubenswrapper[5008]: E0129 15:46:58.679894 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98c93f6a-d803-4df3-8b35-191cbe683adf" containerName="mariadb-account-create-update" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.679988 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="98c93f6a-d803-4df3-8b35-191cbe683adf" containerName="mariadb-account-create-update" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.680263 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="98c93f6a-d803-4df3-8b35-191cbe683adf" containerName="mariadb-account-create-update" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.680745 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.681828 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98c93f6a-d803-4df3-8b35-191cbe683adf-kube-api-access-s5wjx" (OuterVolumeSpecName: "kube-api-access-s5wjx") pod "98c93f6a-d803-4df3-8b35-191cbe683adf" (UID: "98c93f6a-d803-4df3-8b35-191cbe683adf"). InnerVolumeSpecName "kube-api-access-s5wjx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.682539 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.698708 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bw9wr-config-rv27j"] Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.779061 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3bbdbac9-d640-400e-a2a1-69c7e09a3211-scripts\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.779102 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-run\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.779118 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-log-ovn\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.779440 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3bbdbac9-d640-400e-a2a1-69c7e09a3211-additional-scripts\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.779668 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-run-ovn\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.779821 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2knfh\" (UniqueName: \"kubernetes.io/projected/3bbdbac9-d640-400e-a2a1-69c7e09a3211-kube-api-access-2knfh\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.780024 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5wjx\" (UniqueName: \"kubernetes.io/projected/98c93f6a-d803-4df3-8b35-191cbe683adf-kube-api-access-s5wjx\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.780056 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98c93f6a-d803-4df3-8b35-191cbe683adf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.880885 5008 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-run\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.880931 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-log-ovn\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.881025 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3bbdbac9-d640-400e-a2a1-69c7e09a3211-additional-scripts\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.881096 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-run-ovn\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.881279 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-run\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.881307 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-log-ovn\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.881324 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-run-ovn\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.881368 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2knfh\" (UniqueName: \"kubernetes.io/projected/3bbdbac9-d640-400e-a2a1-69c7e09a3211-kube-api-access-2knfh\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.881466 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3bbdbac9-d640-400e-a2a1-69c7e09a3211-scripts\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.882402 5008 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3bbdbac9-d640-400e-a2a1-69c7e09a3211-additional-scripts\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.885385 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3bbdbac9-d640-400e-a2a1-69c7e09a3211-scripts\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:58 crc kubenswrapper[5008]: I0129 15:46:58.899775 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2knfh\" (UniqueName: \"kubernetes.io/projected/3bbdbac9-d640-400e-a2a1-69c7e09a3211-kube-api-access-2knfh\") pod \"ovn-controller-bw9wr-config-rv27j\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:59 crc kubenswrapper[5008]: I0129 15:46:59.007630 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:46:59 crc kubenswrapper[5008]: I0129 15:46:59.240415 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bxxx2" event={"ID":"98c93f6a-d803-4df3-8b35-191cbe683adf","Type":"ContainerDied","Data":"b45ac6c0a52ac32bcd4c9908e0789f9ada50588c4ead8e40fd13649820fea074"} Jan 29 15:46:59 crc kubenswrapper[5008]: I0129 15:46:59.240881 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b45ac6c0a52ac32bcd4c9908e0789f9ada50588c4ead8e40fd13649820fea074" Jan 29 15:46:59 crc kubenswrapper[5008]: I0129 15:46:59.240709 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bxxx2" Jan 29 15:46:59 crc kubenswrapper[5008]: I0129 15:46:59.242017 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-n7wgw" event={"ID":"8277eb2b-44f8-4fd9-af92-1832e0272e0e","Type":"ContainerStarted","Data":"b1174780d2fa3fe7c06477c9d106ea7940e8a6e121cc29c7f9f91c93470ca373"} Jan 29 15:46:59 crc kubenswrapper[5008]: I0129 15:46:59.536994 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bw9wr-config-rv27j"] Jan 29 15:47:00 crc kubenswrapper[5008]: I0129 15:47:00.250766 5008 generic.go:334] "Generic (PLEG): container finished" podID="3bbdbac9-d640-400e-a2a1-69c7e09a3211" containerID="1545206f415995f8be0b1d78b3af14329c9b33899a9464b3994d4df802ea1766" exitCode=0 Jan 29 15:47:00 crc kubenswrapper[5008]: I0129 15:47:00.251278 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bw9wr-config-rv27j" event={"ID":"3bbdbac9-d640-400e-a2a1-69c7e09a3211","Type":"ContainerDied","Data":"1545206f415995f8be0b1d78b3af14329c9b33899a9464b3994d4df802ea1766"} Jan 29 15:47:00 crc kubenswrapper[5008]: I0129 15:47:00.251300 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bw9wr-config-rv27j" event={"ID":"3bbdbac9-d640-400e-a2a1-69c7e09a3211","Type":"ContainerStarted","Data":"1d34faa4e6b9bb9b24a255ac43e9f09cd1978cd595d97ab82630d6bbc255082c"} Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.600234 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.758164 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2knfh\" (UniqueName: \"kubernetes.io/projected/3bbdbac9-d640-400e-a2a1-69c7e09a3211-kube-api-access-2knfh\") pod \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.758235 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3bbdbac9-d640-400e-a2a1-69c7e09a3211-additional-scripts\") pod \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.758321 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3bbdbac9-d640-400e-a2a1-69c7e09a3211-scripts\") pod \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.758367 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-run\") pod \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.758402 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-run-ovn\") pod \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.758476 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-log-ovn\") pod \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\" (UID: \"3bbdbac9-d640-400e-a2a1-69c7e09a3211\") " Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.758766 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-run" (OuterVolumeSpecName: "var-run") pod "3bbdbac9-d640-400e-a2a1-69c7e09a3211" (UID: "3bbdbac9-d640-400e-a2a1-69c7e09a3211"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.758867 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "3bbdbac9-d640-400e-a2a1-69c7e09a3211" (UID: "3bbdbac9-d640-400e-a2a1-69c7e09a3211"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.758895 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "3bbdbac9-d640-400e-a2a1-69c7e09a3211" (UID: "3bbdbac9-d640-400e-a2a1-69c7e09a3211"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.759282 5008 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-run\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.759293 5008 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.759304 5008 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3bbdbac9-d640-400e-a2a1-69c7e09a3211-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.759833 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bbdbac9-d640-400e-a2a1-69c7e09a3211-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "3bbdbac9-d640-400e-a2a1-69c7e09a3211" (UID: "3bbdbac9-d640-400e-a2a1-69c7e09a3211"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.760591 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bbdbac9-d640-400e-a2a1-69c7e09a3211-scripts" (OuterVolumeSpecName: "scripts") pod "3bbdbac9-d640-400e-a2a1-69c7e09a3211" (UID: "3bbdbac9-d640-400e-a2a1-69c7e09a3211"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.764880 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bbdbac9-d640-400e-a2a1-69c7e09a3211-kube-api-access-2knfh" (OuterVolumeSpecName: "kube-api-access-2knfh") pod "3bbdbac9-d640-400e-a2a1-69c7e09a3211" (UID: "3bbdbac9-d640-400e-a2a1-69c7e09a3211"). InnerVolumeSpecName "kube-api-access-2knfh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.861069 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2knfh\" (UniqueName: \"kubernetes.io/projected/3bbdbac9-d640-400e-a2a1-69c7e09a3211-kube-api-access-2knfh\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.861373 5008 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3bbdbac9-d640-400e-a2a1-69c7e09a3211-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:01 crc kubenswrapper[5008]: I0129 15:47:01.861508 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3bbdbac9-d640-400e-a2a1-69c7e09a3211-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:02 crc kubenswrapper[5008]: I0129 15:47:02.064825 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:47:02 crc kubenswrapper[5008]: I0129 15:47:02.071497 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7d8596d3-fe9a-4e1a-969b-2a40a90e437d-etc-swift\") pod \"swift-storage-0\" (UID: \"7d8596d3-fe9a-4e1a-969b-2a40a90e437d\") " pod="openstack/swift-storage-0" Jan 29 15:47:02 crc kubenswrapper[5008]: I0129 15:47:02.238014 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 29 15:47:02 crc kubenswrapper[5008]: I0129 15:47:02.269825 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bw9wr-config-rv27j" event={"ID":"3bbdbac9-d640-400e-a2a1-69c7e09a3211","Type":"ContainerDied","Data":"1d34faa4e6b9bb9b24a255ac43e9f09cd1978cd595d97ab82630d6bbc255082c"} Jan 29 15:47:02 crc kubenswrapper[5008]: I0129 15:47:02.269865 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d34faa4e6b9bb9b24a255ac43e9f09cd1978cd595d97ab82630d6bbc255082c" Jan 29 15:47:02 crc kubenswrapper[5008]: I0129 15:47:02.269984 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-bw9wr-config-rv27j" Jan 29 15:47:02 crc kubenswrapper[5008]: I0129 15:47:02.695143 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-bw9wr-config-rv27j"] Jan 29 15:47:02 crc kubenswrapper[5008]: I0129 15:47:02.701111 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-bw9wr-config-rv27j"] Jan 29 15:47:02 crc kubenswrapper[5008]: I0129 15:47:02.767547 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 29 15:47:03 crc kubenswrapper[5008]: I0129 15:47:03.280768 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7d8596d3-fe9a-4e1a-969b-2a40a90e437d","Type":"ContainerStarted","Data":"80c25143b6f67fe98fdac7a3c17c4bcd0f31a6fa3e14bac09bed0dca8ef6218d"} Jan 29 15:47:03 crc kubenswrapper[5008]: I0129 15:47:03.340649 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bbdbac9-d640-400e-a2a1-69c7e09a3211" path="/var/lib/kubelet/pods/3bbdbac9-d640-400e-a2a1-69c7e09a3211/volumes" Jan 29 15:47:03 crc kubenswrapper[5008]: I0129 15:47:03.364336 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-bw9wr" Jan 29 15:47:03 crc kubenswrapper[5008]: I0129 15:47:03.814045 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.085985 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.125530 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-ch7lz"] Jan 29 15:47:04 crc kubenswrapper[5008]: E0129 15:47:04.125872 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bbdbac9-d640-400e-a2a1-69c7e09a3211" containerName="ovn-config" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.125889 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bbdbac9-d640-400e-a2a1-69c7e09a3211" containerName="ovn-config" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.126048 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bbdbac9-d640-400e-a2a1-69c7e09a3211" containerName="ovn-config" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.128377 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ch7lz" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.138503 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-2158-account-create-update-pjst9"] Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.139494 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2158-account-create-update-pjst9" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.141611 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.151526 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2158-account-create-update-pjst9"] Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.158859 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-ch7lz"] Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.226026 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9pkm\" (UniqueName: \"kubernetes.io/projected/75706daa-3e40-4bbe-bb1b-44120719d48d-kube-api-access-f9pkm\") pod \"cinder-db-create-ch7lz\" (UID: \"75706daa-3e40-4bbe-bb1b-44120719d48d\") " pod="openstack/cinder-db-create-ch7lz" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.226202 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75706daa-3e40-4bbe-bb1b-44120719d48d-operator-scripts\") pod \"cinder-db-create-ch7lz\" (UID: \"75706daa-3e40-4bbe-bb1b-44120719d48d\") " pod="openstack/cinder-db-create-ch7lz" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.226671 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-ls2rz"] Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.227894 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ls2rz" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.247184 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-ls2rz"] Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.327218 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0494524d-f73e-4534-9064-b578d41bea87-operator-scripts\") pod \"cinder-2158-account-create-update-pjst9\" (UID: \"0494524d-f73e-4534-9064-b578d41bea87\") " pod="openstack/cinder-2158-account-create-update-pjst9" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.327270 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75706daa-3e40-4bbe-bb1b-44120719d48d-operator-scripts\") pod \"cinder-db-create-ch7lz\" (UID: \"75706daa-3e40-4bbe-bb1b-44120719d48d\") " pod="openstack/cinder-db-create-ch7lz" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.327290 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvf5q\" (UniqueName: \"kubernetes.io/projected/0494524d-f73e-4534-9064-b578d41bea87-kube-api-access-qvf5q\") pod \"cinder-2158-account-create-update-pjst9\" (UID: \"0494524d-f73e-4534-9064-b578d41bea87\") " pod="openstack/cinder-2158-account-create-update-pjst9" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.327335 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9pkm\" (UniqueName: \"kubernetes.io/projected/75706daa-3e40-4bbe-bb1b-44120719d48d-kube-api-access-f9pkm\") pod \"cinder-db-create-ch7lz\" (UID: \"75706daa-3e40-4bbe-bb1b-44120719d48d\") " pod="openstack/cinder-db-create-ch7lz" Jan 29 15:47:04 crc 
kubenswrapper[5008]: I0129 15:47:04.327351 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36bf973b-f73a-425e-9923-09caa2622a41-operator-scripts\") pod \"barbican-db-create-ls2rz\" (UID: \"36bf973b-f73a-425e-9923-09caa2622a41\") " pod="openstack/barbican-db-create-ls2rz" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.327393 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljgr6\" (UniqueName: \"kubernetes.io/projected/36bf973b-f73a-425e-9923-09caa2622a41-kube-api-access-ljgr6\") pod \"barbican-db-create-ls2rz\" (UID: \"36bf973b-f73a-425e-9923-09caa2622a41\") " pod="openstack/barbican-db-create-ls2rz" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.328119 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75706daa-3e40-4bbe-bb1b-44120719d48d-operator-scripts\") pod \"cinder-db-create-ch7lz\" (UID: \"75706daa-3e40-4bbe-bb1b-44120719d48d\") " pod="openstack/cinder-db-create-ch7lz" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.341790 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-351a-account-create-update-tbrc5"] Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.343089 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-351a-account-create-update-tbrc5" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.348072 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.363532 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-351a-account-create-update-tbrc5"] Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.369523 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9pkm\" (UniqueName: \"kubernetes.io/projected/75706daa-3e40-4bbe-bb1b-44120719d48d-kube-api-access-f9pkm\") pod \"cinder-db-create-ch7lz\" (UID: \"75706daa-3e40-4bbe-bb1b-44120719d48d\") " pod="openstack/cinder-db-create-ch7lz" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.421917 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-rdpcb"] Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.423178 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-rdpcb" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.425385 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sgcvh" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.425652 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.425771 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.430909 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36bf973b-f73a-425e-9923-09caa2622a41-operator-scripts\") pod \"barbican-db-create-ls2rz\" (UID: \"36bf973b-f73a-425e-9923-09caa2622a41\") " pod="openstack/barbican-db-create-ls2rz" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.430965 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/826ac6d8-e950-4bd5-b5f4-0d3f5be5b960-operator-scripts\") pod \"barbican-351a-account-create-update-tbrc5\" (UID: \"826ac6d8-e950-4bd5-b5f4-0d3f5be5b960\") " pod="openstack/barbican-351a-account-create-update-tbrc5" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.430995 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.431011 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljgr6\" (UniqueName: \"kubernetes.io/projected/36bf973b-f73a-425e-9923-09caa2622a41-kube-api-access-ljgr6\") pod \"barbican-db-create-ls2rz\" (UID: \"36bf973b-f73a-425e-9923-09caa2622a41\") " pod="openstack/barbican-db-create-ls2rz" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.431101 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0494524d-f73e-4534-9064-b578d41bea87-operator-scripts\") pod \"cinder-2158-account-create-update-pjst9\" (UID: \"0494524d-f73e-4534-9064-b578d41bea87\") " pod="openstack/cinder-2158-account-create-update-pjst9" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.431131 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvf5q\" (UniqueName: \"kubernetes.io/projected/0494524d-f73e-4534-9064-b578d41bea87-kube-api-access-qvf5q\") pod \"cinder-2158-account-create-update-pjst9\" (UID: \"0494524d-f73e-4534-9064-b578d41bea87\") " pod="openstack/cinder-2158-account-create-update-pjst9" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.431174 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hprzv\" (UniqueName: \"kubernetes.io/projected/826ac6d8-e950-4bd5-b5f4-0d3f5be5b960-kube-api-access-hprzv\") pod \"barbican-351a-account-create-update-tbrc5\" (UID: \"826ac6d8-e950-4bd5-b5f4-0d3f5be5b960\") " pod="openstack/barbican-351a-account-create-update-tbrc5" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.431953 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36bf973b-f73a-425e-9923-09caa2622a41-operator-scripts\") pod \"barbican-db-create-ls2rz\" (UID: 
\"36bf973b-f73a-425e-9923-09caa2622a41\") " pod="openstack/barbican-db-create-ls2rz" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.432052 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0494524d-f73e-4534-9064-b578d41bea87-operator-scripts\") pod \"cinder-2158-account-create-update-pjst9\" (UID: \"0494524d-f73e-4534-9064-b578d41bea87\") " pod="openstack/cinder-2158-account-create-update-pjst9" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.447039 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rdpcb"] Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.450665 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvf5q\" (UniqueName: \"kubernetes.io/projected/0494524d-f73e-4534-9064-b578d41bea87-kube-api-access-qvf5q\") pod \"cinder-2158-account-create-update-pjst9\" (UID: \"0494524d-f73e-4534-9064-b578d41bea87\") " pod="openstack/cinder-2158-account-create-update-pjst9" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.453591 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljgr6\" (UniqueName: \"kubernetes.io/projected/36bf973b-f73a-425e-9923-09caa2622a41-kube-api-access-ljgr6\") pod \"barbican-db-create-ls2rz\" (UID: \"36bf973b-f73a-425e-9923-09caa2622a41\") " pod="openstack/barbican-db-create-ls2rz" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.464576 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ch7lz" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.473758 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2158-account-create-update-pjst9" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.537206 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-8sctv"] Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.538356 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-8sctv" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.542642 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hprzv\" (UniqueName: \"kubernetes.io/projected/826ac6d8-e950-4bd5-b5f4-0d3f5be5b960-kube-api-access-hprzv\") pod \"barbican-351a-account-create-update-tbrc5\" (UID: \"826ac6d8-e950-4bd5-b5f4-0d3f5be5b960\") " pod="openstack/barbican-351a-account-create-update-tbrc5" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.542753 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/826ac6d8-e950-4bd5-b5f4-0d3f5be5b960-operator-scripts\") pod \"barbican-351a-account-create-update-tbrc5\" (UID: \"826ac6d8-e950-4bd5-b5f4-0d3f5be5b960\") " pod="openstack/barbican-351a-account-create-update-tbrc5" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.543022 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-config-data\") pod \"keystone-db-sync-rdpcb\" (UID: \"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183\") " pod="openstack/keystone-db-sync-rdpcb" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.543109 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-combined-ca-bundle\") pod \"keystone-db-sync-rdpcb\" (UID: \"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183\") " pod="openstack/keystone-db-sync-rdpcb" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.543166 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xlln\" (UniqueName: \"kubernetes.io/projected/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-kube-api-access-6xlln\") pod \"keystone-db-sync-rdpcb\" (UID: \"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183\") " pod="openstack/keystone-db-sync-rdpcb" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.543775 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/826ac6d8-e950-4bd5-b5f4-0d3f5be5b960-operator-scripts\") pod \"barbican-351a-account-create-update-tbrc5\" (UID: \"826ac6d8-e950-4bd5-b5f4-0d3f5be5b960\") " pod="openstack/barbican-351a-account-create-update-tbrc5" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.551317 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-9316-account-create-update-hpxxq"] Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.552476 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9316-account-create-update-hpxxq" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.555734 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.555970 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-ls2rz" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.561910 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hprzv\" (UniqueName: \"kubernetes.io/projected/826ac6d8-e950-4bd5-b5f4-0d3f5be5b960-kube-api-access-hprzv\") pod \"barbican-351a-account-create-update-tbrc5\" (UID: \"826ac6d8-e950-4bd5-b5f4-0d3f5be5b960\") " pod="openstack/barbican-351a-account-create-update-tbrc5" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.564645 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-8sctv"] Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.576961 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9316-account-create-update-hpxxq"] Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.644999 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xlln\" (UniqueName: \"kubernetes.io/projected/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-kube-api-access-6xlln\") pod \"keystone-db-sync-rdpcb\" (UID: \"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183\") " pod="openstack/keystone-db-sync-rdpcb" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.645100 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4256c8e0-3a7b-43fd-9ad4-23b2495bc92e-operator-scripts\") pod \"neutron-db-create-8sctv\" (UID: \"4256c8e0-3a7b-43fd-9ad4-23b2495bc92e\") " pod="openstack/neutron-db-create-8sctv" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.645142 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbc0f9ba-13f2-4092-b3e4-a5744ae24174-operator-scripts\") pod \"neutron-9316-account-create-update-hpxxq\" (UID: \"bbc0f9ba-13f2-4092-b3e4-a5744ae24174\") " pod="openstack/neutron-9316-account-create-update-hpxxq" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.645157 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj4h5\" (UniqueName: \"kubernetes.io/projected/bbc0f9ba-13f2-4092-b3e4-a5744ae24174-kube-api-access-xj4h5\") pod \"neutron-9316-account-create-update-hpxxq\" (UID: \"bbc0f9ba-13f2-4092-b3e4-a5744ae24174\") " pod="openstack/neutron-9316-account-create-update-hpxxq" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.645182 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2878\" (UniqueName: \"kubernetes.io/projected/4256c8e0-3a7b-43fd-9ad4-23b2495bc92e-kube-api-access-z2878\") pod \"neutron-db-create-8sctv\" (UID: \"4256c8e0-3a7b-43fd-9ad4-23b2495bc92e\") " pod="openstack/neutron-db-create-8sctv" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.645212 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-config-data\") pod \"keystone-db-sync-rdpcb\" (UID: \"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183\") " pod="openstack/keystone-db-sync-rdpcb" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.645240 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-combined-ca-bundle\") pod 
\"keystone-db-sync-rdpcb\" (UID: \"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183\") " pod="openstack/keystone-db-sync-rdpcb" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.649095 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-combined-ca-bundle\") pod \"keystone-db-sync-rdpcb\" (UID: \"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183\") " pod="openstack/keystone-db-sync-rdpcb" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.657696 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-config-data\") pod \"keystone-db-sync-rdpcb\" (UID: \"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183\") " pod="openstack/keystone-db-sync-rdpcb" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.660196 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xlln\" (UniqueName: \"kubernetes.io/projected/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-kube-api-access-6xlln\") pod \"keystone-db-sync-rdpcb\" (UID: \"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183\") " pod="openstack/keystone-db-sync-rdpcb" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.663145 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-351a-account-create-update-tbrc5" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.746277 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4256c8e0-3a7b-43fd-9ad4-23b2495bc92e-operator-scripts\") pod \"neutron-db-create-8sctv\" (UID: \"4256c8e0-3a7b-43fd-9ad4-23b2495bc92e\") " pod="openstack/neutron-db-create-8sctv" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.746338 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbc0f9ba-13f2-4092-b3e4-a5744ae24174-operator-scripts\") pod \"neutron-9316-account-create-update-hpxxq\" (UID: \"bbc0f9ba-13f2-4092-b3e4-a5744ae24174\") " pod="openstack/neutron-9316-account-create-update-hpxxq" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.746356 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj4h5\" (UniqueName: \"kubernetes.io/projected/bbc0f9ba-13f2-4092-b3e4-a5744ae24174-kube-api-access-xj4h5\") pod \"neutron-9316-account-create-update-hpxxq\" (UID: \"bbc0f9ba-13f2-4092-b3e4-a5744ae24174\") " pod="openstack/neutron-9316-account-create-update-hpxxq" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.746379 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2878\" (UniqueName: \"kubernetes.io/projected/4256c8e0-3a7b-43fd-9ad4-23b2495bc92e-kube-api-access-z2878\") pod \"neutron-db-create-8sctv\" (UID: \"4256c8e0-3a7b-43fd-9ad4-23b2495bc92e\") " pod="openstack/neutron-db-create-8sctv" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.747597 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4256c8e0-3a7b-43fd-9ad4-23b2495bc92e-operator-scripts\") pod \"neutron-db-create-8sctv\" (UID: \"4256c8e0-3a7b-43fd-9ad4-23b2495bc92e\") " pod="openstack/neutron-db-create-8sctv" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.748186 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbc0f9ba-13f2-4092-b3e4-a5744ae24174-operator-scripts\") pod \"neutron-9316-account-create-update-hpxxq\" (UID: \"bbc0f9ba-13f2-4092-b3e4-a5744ae24174\") " pod="openstack/neutron-9316-account-create-update-hpxxq" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.747934 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rdpcb" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.761802 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2878\" (UniqueName: \"kubernetes.io/projected/4256c8e0-3a7b-43fd-9ad4-23b2495bc92e-kube-api-access-z2878\") pod \"neutron-db-create-8sctv\" (UID: \"4256c8e0-3a7b-43fd-9ad4-23b2495bc92e\") " pod="openstack/neutron-db-create-8sctv" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.775269 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj4h5\" (UniqueName: \"kubernetes.io/projected/bbc0f9ba-13f2-4092-b3e4-a5744ae24174-kube-api-access-xj4h5\") pod \"neutron-9316-account-create-update-hpxxq\" (UID: \"bbc0f9ba-13f2-4092-b3e4-a5744ae24174\") " pod="openstack/neutron-9316-account-create-update-hpxxq" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.858128 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-8sctv" Jan 29 15:47:04 crc kubenswrapper[5008]: I0129 15:47:04.920991 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9316-account-create-update-hpxxq" Jan 29 15:47:23 crc kubenswrapper[5008]: E0129 15:47:23.399707 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 29 15:47:23 crc kubenswrapper[5008]: E0129 15:47:23.402108 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9m6lk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-n7wgw_openstack(8277eb2b-44f8-4fd9-af92-1832e0272e0e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:47:23 crc kubenswrapper[5008]: E0129 15:47:23.403308 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-n7wgw" podUID="8277eb2b-44f8-4fd9-af92-1832e0272e0e" Jan 29 15:47:23 crc kubenswrapper[5008]: E0129 15:47:23.563043 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-n7wgw" podUID="8277eb2b-44f8-4fd9-af92-1832e0272e0e" Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.023869 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rdpcb"] Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.048635 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9316-account-create-update-hpxxq"] Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.056599 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-ls2rz"] Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.062362 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-8sctv"] Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.067965 5008 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/cinder-2158-account-create-update-pjst9"] Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.075104 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-ch7lz"] Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.082595 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-351a-account-create-update-tbrc5"] Jan 29 15:47:24 crc kubenswrapper[5008]: W0129 15:47:24.175630 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75706daa_3e40_4bbe_bb1b_44120719d48d.slice/crio-42010612b037d6fbdff5bbefce52a78ed791578647d4824b32c8de7c57ab879c WatchSource:0}: Error finding container 42010612b037d6fbdff5bbefce52a78ed791578647d4824b32c8de7c57ab879c: Status 404 returned error can't find the container with id 42010612b037d6fbdff5bbefce52a78ed791578647d4824b32c8de7c57ab879c Jan 29 15:47:24 crc kubenswrapper[5008]: W0129 15:47:24.204716 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbc0f9ba_13f2_4092_b3e4_a5744ae24174.slice/crio-e4d5c2c1aeee86641b6212aae340f1ae72f844e31ce4d2724c0a3aa7146bd0c2 WatchSource:0}: Error finding container e4d5c2c1aeee86641b6212aae340f1ae72f844e31ce4d2724c0a3aa7146bd0c2: Status 404 returned error can't find the container with id e4d5c2c1aeee86641b6212aae340f1ae72f844e31ce4d2724c0a3aa7146bd0c2 Jan 29 15:47:24 crc kubenswrapper[5008]: W0129 15:47:24.205276 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod826ac6d8_e950_4bd5_b5f4_0d3f5be5b960.slice/crio-627883610a5617ebcdf236fef832e43198615e811da995a1cba676167544ea47 WatchSource:0}: Error finding container 627883610a5617ebcdf236fef832e43198615e811da995a1cba676167544ea47: Status 404 returned error can't find the container with id 627883610a5617ebcdf236fef832e43198615e811da995a1cba676167544ea47 Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.567144 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-8sctv" event={"ID":"4256c8e0-3a7b-43fd-9ad4-23b2495bc92e","Type":"ContainerStarted","Data":"7ca75479ef338f89bd18ce28569eaa84b3102801c80c2efffac33fec97763ec5"} Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.568550 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9316-account-create-update-hpxxq" event={"ID":"bbc0f9ba-13f2-4092-b3e4-a5744ae24174","Type":"ContainerStarted","Data":"e4d5c2c1aeee86641b6212aae340f1ae72f844e31ce4d2724c0a3aa7146bd0c2"} Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.569368 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rdpcb" event={"ID":"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183","Type":"ContainerStarted","Data":"e523bacd3d00d7c299e8d1ee84b44f3d8235fdd0edd8465f7b1e2360b0719fb8"} Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.571035 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2158-account-create-update-pjst9" event={"ID":"0494524d-f73e-4534-9064-b578d41bea87","Type":"ContainerStarted","Data":"f7337579b0c05cef5036ba373b06ec94f4c86859c74c4cf38a1a6c866cfa3d5e"} Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.571070 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2158-account-create-update-pjst9" 
event={"ID":"0494524d-f73e-4534-9064-b578d41bea87","Type":"ContainerStarted","Data":"94f893ff8af23a8830de458746a4ab5e3bf3e11dbeefac60089754522f1ff45b"} Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.574589 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ch7lz" event={"ID":"75706daa-3e40-4bbe-bb1b-44120719d48d","Type":"ContainerStarted","Data":"6c61687e12f73c515f558a6a4b2824cb17762d52f0bf2ebbaaed1f1b074de225"} Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.574645 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ch7lz" event={"ID":"75706daa-3e40-4bbe-bb1b-44120719d48d","Type":"ContainerStarted","Data":"42010612b037d6fbdff5bbefce52a78ed791578647d4824b32c8de7c57ab879c"} Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.575896 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-351a-account-create-update-tbrc5" event={"ID":"826ac6d8-e950-4bd5-b5f4-0d3f5be5b960","Type":"ContainerStarted","Data":"627883610a5617ebcdf236fef832e43198615e811da995a1cba676167544ea47"} Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.577423 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ls2rz" event={"ID":"36bf973b-f73a-425e-9923-09caa2622a41","Type":"ContainerStarted","Data":"ed5cc6ce99bd405e3383395a42bb5c67b67109276e849e2857a96654dfe667f0"} Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.590763 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-2158-account-create-update-pjst9" podStartSLOduration=20.590742163 podStartE2EDuration="20.590742163s" podCreationTimestamp="2026-01-29 15:47:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:47:24.584370058 +0000 UTC m=+1188.257224295" watchObservedRunningTime="2026-01-29 15:47:24.590742163 +0000 UTC m=+1188.263596400" Jan 29 15:47:24 crc kubenswrapper[5008]: I0129 15:47:24.605020 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-ch7lz" podStartSLOduration=20.605001019 podStartE2EDuration="20.605001019s" podCreationTimestamp="2026-01-29 15:47:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:47:24.600329266 +0000 UTC m=+1188.273183523" watchObservedRunningTime="2026-01-29 15:47:24.605001019 +0000 UTC m=+1188.277855256" Jan 29 15:47:25 crc kubenswrapper[5008]: I0129 15:47:25.592629 5008 generic.go:334] "Generic (PLEG): container finished" podID="826ac6d8-e950-4bd5-b5f4-0d3f5be5b960" containerID="ca99078315f1792020893b0155199b35cf28a5d2e22b71f951d215c87d9c1097" exitCode=0 Jan 29 15:47:25 crc kubenswrapper[5008]: I0129 15:47:25.592845 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-351a-account-create-update-tbrc5" event={"ID":"826ac6d8-e950-4bd5-b5f4-0d3f5be5b960","Type":"ContainerDied","Data":"ca99078315f1792020893b0155199b35cf28a5d2e22b71f951d215c87d9c1097"} Jan 29 15:47:25 crc kubenswrapper[5008]: I0129 15:47:25.595561 5008 generic.go:334] "Generic (PLEG): container finished" podID="36bf973b-f73a-425e-9923-09caa2622a41" containerID="64cf9712b9a6a018d4f38c41a288a8f15705222afe6688de0979f4ea4ab02893" exitCode=0 Jan 29 15:47:25 crc kubenswrapper[5008]: I0129 15:47:25.595605 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ls2rz" 
event={"ID":"36bf973b-f73a-425e-9923-09caa2622a41","Type":"ContainerDied","Data":"64cf9712b9a6a018d4f38c41a288a8f15705222afe6688de0979f4ea4ab02893"} Jan 29 15:47:25 crc kubenswrapper[5008]: I0129 15:47:25.605087 5008 generic.go:334] "Generic (PLEG): container finished" podID="4256c8e0-3a7b-43fd-9ad4-23b2495bc92e" containerID="e3f4a0bf80eb8c9f3329a22ef35badafd100d8a972517b1491615c6612a7b55a" exitCode=0 Jan 29 15:47:25 crc kubenswrapper[5008]: I0129 15:47:25.605215 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-8sctv" event={"ID":"4256c8e0-3a7b-43fd-9ad4-23b2495bc92e","Type":"ContainerDied","Data":"e3f4a0bf80eb8c9f3329a22ef35badafd100d8a972517b1491615c6612a7b55a"} Jan 29 15:47:25 crc kubenswrapper[5008]: I0129 15:47:25.612455 5008 generic.go:334] "Generic (PLEG): container finished" podID="bbc0f9ba-13f2-4092-b3e4-a5744ae24174" containerID="6f05c53cf48d2a332db38d95de29d8cfb8a983e457e1d6fed6a77e002f9f5183" exitCode=0 Jan 29 15:47:25 crc kubenswrapper[5008]: I0129 15:47:25.612552 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9316-account-create-update-hpxxq" event={"ID":"bbc0f9ba-13f2-4092-b3e4-a5744ae24174","Type":"ContainerDied","Data":"6f05c53cf48d2a332db38d95de29d8cfb8a983e457e1d6fed6a77e002f9f5183"} Jan 29 15:47:25 crc kubenswrapper[5008]: I0129 15:47:25.616301 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7d8596d3-fe9a-4e1a-969b-2a40a90e437d","Type":"ContainerStarted","Data":"2212529dd2f325960b0a75d9f75f86cf2ff6a278a3f594a0528f1f59cdb29f95"} Jan 29 15:47:25 crc kubenswrapper[5008]: I0129 15:47:25.616451 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7d8596d3-fe9a-4e1a-969b-2a40a90e437d","Type":"ContainerStarted","Data":"cbb09b30f2da85dabef49da1927febd1bd6890e6db3d10092cebb71cfa1da299"} Jan 29 15:47:25 crc kubenswrapper[5008]: I0129 15:47:25.616544 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7d8596d3-fe9a-4e1a-969b-2a40a90e437d","Type":"ContainerStarted","Data":"25611f8a32294d584338b6ed28f48d7d0cbad43cf19e86aa5d7009d821a5705e"} Jan 29 15:47:25 crc kubenswrapper[5008]: I0129 15:47:25.616629 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7d8596d3-fe9a-4e1a-969b-2a40a90e437d","Type":"ContainerStarted","Data":"1a9b0307771a31787dd09578530f5d5331db12304403edf1b9227795cf40f412"} Jan 29 15:47:25 crc kubenswrapper[5008]: I0129 15:47:25.619213 5008 generic.go:334] "Generic (PLEG): container finished" podID="0494524d-f73e-4534-9064-b578d41bea87" containerID="f7337579b0c05cef5036ba373b06ec94f4c86859c74c4cf38a1a6c866cfa3d5e" exitCode=0 Jan 29 15:47:25 crc kubenswrapper[5008]: I0129 15:47:25.619293 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2158-account-create-update-pjst9" event={"ID":"0494524d-f73e-4534-9064-b578d41bea87","Type":"ContainerDied","Data":"f7337579b0c05cef5036ba373b06ec94f4c86859c74c4cf38a1a6c866cfa3d5e"} Jan 29 15:47:25 crc kubenswrapper[5008]: I0129 15:47:25.624064 5008 generic.go:334] "Generic (PLEG): container finished" podID="75706daa-3e40-4bbe-bb1b-44120719d48d" containerID="6c61687e12f73c515f558a6a4b2824cb17762d52f0bf2ebbaaed1f1b074de225" exitCode=0 Jan 29 15:47:25 crc kubenswrapper[5008]: I0129 15:47:25.624143 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ch7lz" 
event={"ID":"75706daa-3e40-4bbe-bb1b-44120719d48d","Type":"ContainerDied","Data":"6c61687e12f73c515f558a6a4b2824cb17762d52f0bf2ebbaaed1f1b074de225"} Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.650875 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9316-account-create-update-hpxxq" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.651842 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ch7lz" event={"ID":"75706daa-3e40-4bbe-bb1b-44120719d48d","Type":"ContainerDied","Data":"42010612b037d6fbdff5bbefce52a78ed791578647d4824b32c8de7c57ab879c"} Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.651886 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42010612b037d6fbdff5bbefce52a78ed791578647d4824b32c8de7c57ab879c" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.654226 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-351a-account-create-update-tbrc5" event={"ID":"826ac6d8-e950-4bd5-b5f4-0d3f5be5b960","Type":"ContainerDied","Data":"627883610a5617ebcdf236fef832e43198615e811da995a1cba676167544ea47"} Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.654253 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="627883610a5617ebcdf236fef832e43198615e811da995a1cba676167544ea47" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.655776 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ls2rz" event={"ID":"36bf973b-f73a-425e-9923-09caa2622a41","Type":"ContainerDied","Data":"ed5cc6ce99bd405e3383395a42bb5c67b67109276e849e2857a96654dfe667f0"} Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.655824 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed5cc6ce99bd405e3383395a42bb5c67b67109276e849e2857a96654dfe667f0" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.657083 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-8sctv" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.657892 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-8sctv" event={"ID":"4256c8e0-3a7b-43fd-9ad4-23b2495bc92e","Type":"ContainerDied","Data":"7ca75479ef338f89bd18ce28569eaa84b3102801c80c2efffac33fec97763ec5"} Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.657927 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ca75479ef338f89bd18ce28569eaa84b3102801c80c2efffac33fec97763ec5" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.659566 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9316-account-create-update-hpxxq" event={"ID":"bbc0f9ba-13f2-4092-b3e4-a5744ae24174","Type":"ContainerDied","Data":"e4d5c2c1aeee86641b6212aae340f1ae72f844e31ce4d2724c0a3aa7146bd0c2"} Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.659593 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4d5c2c1aeee86641b6212aae340f1ae72f844e31ce4d2724c0a3aa7146bd0c2" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.659636 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9316-account-create-update-hpxxq" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.661663 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2158-account-create-update-pjst9" event={"ID":"0494524d-f73e-4534-9064-b578d41bea87","Type":"ContainerDied","Data":"94f893ff8af23a8830de458746a4ab5e3bf3e11dbeefac60089754522f1ff45b"} Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.661690 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94f893ff8af23a8830de458746a4ab5e3bf3e11dbeefac60089754522f1ff45b" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.673518 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-351a-account-create-update-tbrc5" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.719303 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ch7lz" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.725678 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2158-account-create-update-pjst9" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.744306 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ls2rz" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.804432 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2878\" (UniqueName: \"kubernetes.io/projected/4256c8e0-3a7b-43fd-9ad4-23b2495bc92e-kube-api-access-z2878\") pod \"4256c8e0-3a7b-43fd-9ad4-23b2495bc92e\" (UID: \"4256c8e0-3a7b-43fd-9ad4-23b2495bc92e\") " Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.804492 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbc0f9ba-13f2-4092-b3e4-a5744ae24174-operator-scripts\") pod \"bbc0f9ba-13f2-4092-b3e4-a5744ae24174\" (UID: \"bbc0f9ba-13f2-4092-b3e4-a5744ae24174\") " Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.804542 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4256c8e0-3a7b-43fd-9ad4-23b2495bc92e-operator-scripts\") pod \"4256c8e0-3a7b-43fd-9ad4-23b2495bc92e\" (UID: \"4256c8e0-3a7b-43fd-9ad4-23b2495bc92e\") " Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.804668 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xj4h5\" (UniqueName: \"kubernetes.io/projected/bbc0f9ba-13f2-4092-b3e4-a5744ae24174-kube-api-access-xj4h5\") pod \"bbc0f9ba-13f2-4092-b3e4-a5744ae24174\" (UID: \"bbc0f9ba-13f2-4092-b3e4-a5744ae24174\") " Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.804693 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hprzv\" (UniqueName: \"kubernetes.io/projected/826ac6d8-e950-4bd5-b5f4-0d3f5be5b960-kube-api-access-hprzv\") pod \"826ac6d8-e950-4bd5-b5f4-0d3f5be5b960\" (UID: \"826ac6d8-e950-4bd5-b5f4-0d3f5be5b960\") " Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.804775 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/826ac6d8-e950-4bd5-b5f4-0d3f5be5b960-operator-scripts\") pod \"826ac6d8-e950-4bd5-b5f4-0d3f5be5b960\" (UID: 
\"826ac6d8-e950-4bd5-b5f4-0d3f5be5b960\") " Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.805346 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbc0f9ba-13f2-4092-b3e4-a5744ae24174-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bbc0f9ba-13f2-4092-b3e4-a5744ae24174" (UID: "bbc0f9ba-13f2-4092-b3e4-a5744ae24174"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.805348 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/826ac6d8-e950-4bd5-b5f4-0d3f5be5b960-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "826ac6d8-e950-4bd5-b5f4-0d3f5be5b960" (UID: "826ac6d8-e950-4bd5-b5f4-0d3f5be5b960"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.805417 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4256c8e0-3a7b-43fd-9ad4-23b2495bc92e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4256c8e0-3a7b-43fd-9ad4-23b2495bc92e" (UID: "4256c8e0-3a7b-43fd-9ad4-23b2495bc92e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.808363 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbc0f9ba-13f2-4092-b3e4-a5744ae24174-kube-api-access-xj4h5" (OuterVolumeSpecName: "kube-api-access-xj4h5") pod "bbc0f9ba-13f2-4092-b3e4-a5744ae24174" (UID: "bbc0f9ba-13f2-4092-b3e4-a5744ae24174"). InnerVolumeSpecName "kube-api-access-xj4h5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.808657 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4256c8e0-3a7b-43fd-9ad4-23b2495bc92e-kube-api-access-z2878" (OuterVolumeSpecName: "kube-api-access-z2878") pod "4256c8e0-3a7b-43fd-9ad4-23b2495bc92e" (UID: "4256c8e0-3a7b-43fd-9ad4-23b2495bc92e"). InnerVolumeSpecName "kube-api-access-z2878". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.814760 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/826ac6d8-e950-4bd5-b5f4-0d3f5be5b960-kube-api-access-hprzv" (OuterVolumeSpecName: "kube-api-access-hprzv") pod "826ac6d8-e950-4bd5-b5f4-0d3f5be5b960" (UID: "826ac6d8-e950-4bd5-b5f4-0d3f5be5b960"). InnerVolumeSpecName "kube-api-access-hprzv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.906071 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75706daa-3e40-4bbe-bb1b-44120719d48d-operator-scripts\") pod \"75706daa-3e40-4bbe-bb1b-44120719d48d\" (UID: \"75706daa-3e40-4bbe-bb1b-44120719d48d\") " Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.906141 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36bf973b-f73a-425e-9923-09caa2622a41-operator-scripts\") pod \"36bf973b-f73a-425e-9923-09caa2622a41\" (UID: \"36bf973b-f73a-425e-9923-09caa2622a41\") " Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.906167 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvf5q\" (UniqueName: \"kubernetes.io/projected/0494524d-f73e-4534-9064-b578d41bea87-kube-api-access-qvf5q\") pod \"0494524d-f73e-4534-9064-b578d41bea87\" (UID: \"0494524d-f73e-4534-9064-b578d41bea87\") " Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.906190 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0494524d-f73e-4534-9064-b578d41bea87-operator-scripts\") pod \"0494524d-f73e-4534-9064-b578d41bea87\" (UID: \"0494524d-f73e-4534-9064-b578d41bea87\") " Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.906276 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9pkm\" (UniqueName: \"kubernetes.io/projected/75706daa-3e40-4bbe-bb1b-44120719d48d-kube-api-access-f9pkm\") pod \"75706daa-3e40-4bbe-bb1b-44120719d48d\" (UID: \"75706daa-3e40-4bbe-bb1b-44120719d48d\") " Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.906315 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljgr6\" (UniqueName: \"kubernetes.io/projected/36bf973b-f73a-425e-9923-09caa2622a41-kube-api-access-ljgr6\") pod \"36bf973b-f73a-425e-9923-09caa2622a41\" (UID: \"36bf973b-f73a-425e-9923-09caa2622a41\") " Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.906503 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75706daa-3e40-4bbe-bb1b-44120719d48d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "75706daa-3e40-4bbe-bb1b-44120719d48d" (UID: "75706daa-3e40-4bbe-bb1b-44120719d48d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.906536 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36bf973b-f73a-425e-9923-09caa2622a41-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "36bf973b-f73a-425e-9923-09caa2622a41" (UID: "36bf973b-f73a-425e-9923-09caa2622a41"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.906823 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/826ac6d8-e950-4bd5-b5f4-0d3f5be5b960-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.906852 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2878\" (UniqueName: \"kubernetes.io/projected/4256c8e0-3a7b-43fd-9ad4-23b2495bc92e-kube-api-access-z2878\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.906868 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbc0f9ba-13f2-4092-b3e4-a5744ae24174-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.906880 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4256c8e0-3a7b-43fd-9ad4-23b2495bc92e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.906893 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xj4h5\" (UniqueName: \"kubernetes.io/projected/bbc0f9ba-13f2-4092-b3e4-a5744ae24174-kube-api-access-xj4h5\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.906866 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0494524d-f73e-4534-9064-b578d41bea87-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0494524d-f73e-4534-9064-b578d41bea87" (UID: "0494524d-f73e-4534-9064-b578d41bea87"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.907383 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75706daa-3e40-4bbe-bb1b-44120719d48d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.907409 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hprzv\" (UniqueName: \"kubernetes.io/projected/826ac6d8-e950-4bd5-b5f4-0d3f5be5b960-kube-api-access-hprzv\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.907421 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36bf973b-f73a-425e-9923-09caa2622a41-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.909817 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0494524d-f73e-4534-9064-b578d41bea87-kube-api-access-qvf5q" (OuterVolumeSpecName: "kube-api-access-qvf5q") pod "0494524d-f73e-4534-9064-b578d41bea87" (UID: "0494524d-f73e-4534-9064-b578d41bea87"). InnerVolumeSpecName "kube-api-access-qvf5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.909969 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75706daa-3e40-4bbe-bb1b-44120719d48d-kube-api-access-f9pkm" (OuterVolumeSpecName: "kube-api-access-f9pkm") pod "75706daa-3e40-4bbe-bb1b-44120719d48d" (UID: "75706daa-3e40-4bbe-bb1b-44120719d48d"). 
InnerVolumeSpecName "kube-api-access-f9pkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:47:28 crc kubenswrapper[5008]: I0129 15:47:28.910990 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36bf973b-f73a-425e-9923-09caa2622a41-kube-api-access-ljgr6" (OuterVolumeSpecName: "kube-api-access-ljgr6") pod "36bf973b-f73a-425e-9923-09caa2622a41" (UID: "36bf973b-f73a-425e-9923-09caa2622a41"). InnerVolumeSpecName "kube-api-access-ljgr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:47:29 crc kubenswrapper[5008]: I0129 15:47:29.008517 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvf5q\" (UniqueName: \"kubernetes.io/projected/0494524d-f73e-4534-9064-b578d41bea87-kube-api-access-qvf5q\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:29 crc kubenswrapper[5008]: I0129 15:47:29.009192 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0494524d-f73e-4534-9064-b578d41bea87-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:29 crc kubenswrapper[5008]: I0129 15:47:29.009213 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9pkm\" (UniqueName: \"kubernetes.io/projected/75706daa-3e40-4bbe-bb1b-44120719d48d-kube-api-access-f9pkm\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:29 crc kubenswrapper[5008]: I0129 15:47:29.009224 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljgr6\" (UniqueName: \"kubernetes.io/projected/36bf973b-f73a-425e-9923-09caa2622a41-kube-api-access-ljgr6\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:29 crc kubenswrapper[5008]: I0129 15:47:29.676141 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7d8596d3-fe9a-4e1a-969b-2a40a90e437d","Type":"ContainerStarted","Data":"d023f0a06c3a4858a00f2e869c3a0dbb0bed1aa0a84b387042d32627a5131e98"} Jan 29 15:47:29 crc kubenswrapper[5008]: I0129 15:47:29.677206 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7d8596d3-fe9a-4e1a-969b-2a40a90e437d","Type":"ContainerStarted","Data":"9fbf7b1ddf1641b2b56def51b3cd15d59889fb82eeab0a92495fb54fa70a3584"} Jan 29 15:47:29 crc kubenswrapper[5008]: I0129 15:47:29.677355 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7d8596d3-fe9a-4e1a-969b-2a40a90e437d","Type":"ContainerStarted","Data":"b7a2845d3f78241dfae6156f9b58f3be79eb1b7aeaadc6035f50335680bc6960"} Jan 29 15:47:29 crc kubenswrapper[5008]: I0129 15:47:29.677496 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7d8596d3-fe9a-4e1a-969b-2a40a90e437d","Type":"ContainerStarted","Data":"5055f3c6cc3af28f8f53be3a562c6490dbaf97f77ac697e5466544ec9a05d491"} Jan 29 15:47:29 crc kubenswrapper[5008]: I0129 15:47:29.678143 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ch7lz" Jan 29 15:47:29 crc kubenswrapper[5008]: I0129 15:47:29.678320 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ls2rz" Jan 29 15:47:29 crc kubenswrapper[5008]: I0129 15:47:29.678541 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2158-account-create-update-pjst9" Jan 29 15:47:29 crc kubenswrapper[5008]: I0129 15:47:29.678581 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-8sctv" Jan 29 15:47:29 crc kubenswrapper[5008]: I0129 15:47:29.678619 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-351a-account-create-update-tbrc5" Jan 29 15:47:29 crc kubenswrapper[5008]: I0129 15:47:29.678628 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rdpcb" event={"ID":"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183","Type":"ContainerStarted","Data":"eacc0139ac8b112a9da7c9f07cae68774d1d37d4498b8a7bcd2ca73c4e6b805f"} Jan 29 15:47:29 crc kubenswrapper[5008]: I0129 15:47:29.712771 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-rdpcb" podStartSLOduration=21.463216666 podStartE2EDuration="25.712744574s" podCreationTimestamp="2026-01-29 15:47:04 +0000 UTC" firstStartedPulling="2026-01-29 15:47:24.187114781 +0000 UTC m=+1187.859969018" lastFinishedPulling="2026-01-29 15:47:28.436642689 +0000 UTC m=+1192.109496926" observedRunningTime="2026-01-29 15:47:29.700065037 +0000 UTC m=+1193.372919294" watchObservedRunningTime="2026-01-29 15:47:29.712744574 +0000 UTC m=+1193.385598821" Jan 29 15:47:31 crc kubenswrapper[5008]: I0129 15:47:31.700998 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7d8596d3-fe9a-4e1a-969b-2a40a90e437d","Type":"ContainerStarted","Data":"c42fa5ec399e9df0cd5d9503d61de7bf9bdcb5b5027dcd02746f8446fed7da66"} Jan 29 15:47:31 crc kubenswrapper[5008]: I0129 15:47:31.701293 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7d8596d3-fe9a-4e1a-969b-2a40a90e437d","Type":"ContainerStarted","Data":"9eb9b05b0bef55e69bf25de1e5d402963b23a145e9cd5e7bf113c41de93a6318"} Jan 29 15:47:31 crc kubenswrapper[5008]: I0129 15:47:31.701308 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7d8596d3-fe9a-4e1a-969b-2a40a90e437d","Type":"ContainerStarted","Data":"fda414e92a3ca3aface69b1a9c98558a6ec8b9c8d878064054f64fa3507d1b0c"} Jan 29 15:47:32 crc kubenswrapper[5008]: I0129 15:47:32.714555 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7d8596d3-fe9a-4e1a-969b-2a40a90e437d","Type":"ContainerStarted","Data":"6cf28e6fc3cff5bdc43980966f3928eb9a5c5615d1c60550b291c734697d20c8"} Jan 29 15:47:33 crc kubenswrapper[5008]: I0129 15:47:33.734143 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7d8596d3-fe9a-4e1a-969b-2a40a90e437d","Type":"ContainerStarted","Data":"9fb6e0bee1283670a65e1e295200557f9a262303b6d0de045b04513bb4e07886"} Jan 29 15:47:35 crc kubenswrapper[5008]: I0129 15:47:35.755075 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7d8596d3-fe9a-4e1a-969b-2a40a90e437d","Type":"ContainerStarted","Data":"dcd568c6c622d136e4a94c3dc4bc9021d6aa1b554bf5fac44a2f31e5ba6c5c56"} Jan 29 15:47:37 crc kubenswrapper[5008]: I0129 15:47:37.774842 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7d8596d3-fe9a-4e1a-969b-2a40a90e437d","Type":"ContainerStarted","Data":"a37e87d63e7a4f5cd475c5cc437007014e64b560a242462428fe61e6e7ca18ad"} Jan 29 15:47:37 crc kubenswrapper[5008]: 
I0129 15:47:37.835927 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=40.617522106 podStartE2EDuration="1m8.835909978s" podCreationTimestamp="2026-01-29 15:46:29 +0000 UTC" firstStartedPulling="2026-01-29 15:47:02.78615543 +0000 UTC m=+1166.459009667" lastFinishedPulling="2026-01-29 15:47:31.004543302 +0000 UTC m=+1194.677397539" observedRunningTime="2026-01-29 15:47:37.826699344 +0000 UTC m=+1201.499553641" watchObservedRunningTime="2026-01-29 15:47:37.835909978 +0000 UTC m=+1201.508764205" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.134863 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-k22kg"] Jan 29 15:47:38 crc kubenswrapper[5008]: E0129 15:47:38.135168 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="826ac6d8-e950-4bd5-b5f4-0d3f5be5b960" containerName="mariadb-account-create-update" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.135180 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="826ac6d8-e950-4bd5-b5f4-0d3f5be5b960" containerName="mariadb-account-create-update" Jan 29 15:47:38 crc kubenswrapper[5008]: E0129 15:47:38.135198 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4256c8e0-3a7b-43fd-9ad4-23b2495bc92e" containerName="mariadb-database-create" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.135204 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="4256c8e0-3a7b-43fd-9ad4-23b2495bc92e" containerName="mariadb-database-create" Jan 29 15:47:38 crc kubenswrapper[5008]: E0129 15:47:38.135217 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75706daa-3e40-4bbe-bb1b-44120719d48d" containerName="mariadb-database-create" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.135223 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="75706daa-3e40-4bbe-bb1b-44120719d48d" containerName="mariadb-database-create" Jan 29 15:47:38 crc kubenswrapper[5008]: E0129 15:47:38.135235 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36bf973b-f73a-425e-9923-09caa2622a41" containerName="mariadb-database-create" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.135241 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="36bf973b-f73a-425e-9923-09caa2622a41" containerName="mariadb-database-create" Jan 29 15:47:38 crc kubenswrapper[5008]: E0129 15:47:38.135254 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0494524d-f73e-4534-9064-b578d41bea87" containerName="mariadb-account-create-update" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.135260 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="0494524d-f73e-4534-9064-b578d41bea87" containerName="mariadb-account-create-update" Jan 29 15:47:38 crc kubenswrapper[5008]: E0129 15:47:38.135273 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbc0f9ba-13f2-4092-b3e4-a5744ae24174" containerName="mariadb-account-create-update" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.135280 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbc0f9ba-13f2-4092-b3e4-a5744ae24174" containerName="mariadb-account-create-update" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.135440 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="75706daa-3e40-4bbe-bb1b-44120719d48d" containerName="mariadb-database-create" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.135456 5008 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="bbc0f9ba-13f2-4092-b3e4-a5744ae24174" containerName="mariadb-account-create-update" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.135464 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="36bf973b-f73a-425e-9923-09caa2622a41" containerName="mariadb-database-create" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.135473 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="0494524d-f73e-4534-9064-b578d41bea87" containerName="mariadb-account-create-update" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.135480 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="826ac6d8-e950-4bd5-b5f4-0d3f5be5b960" containerName="mariadb-account-create-update" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.135489 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="4256c8e0-3a7b-43fd-9ad4-23b2495bc92e" containerName="mariadb-database-create" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.136228 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.139973 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.159409 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-k22kg"] Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.222693 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.222770 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.222875 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.222902 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsqfw\" (UniqueName: \"kubernetes.io/projected/1d24d44a-1e0f-43ea-a065-9c4f369e0045-kube-api-access-zsqfw\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.223011 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-config\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc 
kubenswrapper[5008]: I0129 15:47:38.223143 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.324589 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.324635 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsqfw\" (UniqueName: \"kubernetes.io/projected/1d24d44a-1e0f-43ea-a065-9c4f369e0045-kube-api-access-zsqfw\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.324656 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-config\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.324682 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.324752 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.324819 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.325671 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-config\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.325757 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.325864 5008 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.325917 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.326177 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.348184 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsqfw\" (UniqueName: \"kubernetes.io/projected/1d24d44a-1e0f-43ea-a065-9c4f369e0045-kube-api-access-zsqfw\") pod \"dnsmasq-dns-5c79d794d7-k22kg\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:38 crc kubenswrapper[5008]: I0129 15:47:38.456236 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:47:43 crc kubenswrapper[5008]: I0129 15:47:43.990438 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:47:43 crc kubenswrapper[5008]: I0129 15:47:43.991158 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:48:01 crc kubenswrapper[5008]: I0129 15:48:01.972088 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-k22kg"] Jan 29 15:48:02 crc kubenswrapper[5008]: I0129 15:48:02.006609 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" event={"ID":"1d24d44a-1e0f-43ea-a065-9c4f369e0045","Type":"ContainerStarted","Data":"ce4f811545cec808190704383cf9c2a75b48fb0966a323612a8e888c6a8f70bd"} Jan 29 15:48:03 crc kubenswrapper[5008]: I0129 15:48:03.037680 5008 generic.go:334] "Generic (PLEG): container finished" podID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerID="083f5bd0f3b73b9e5442787b14d42aed7700b0e82373d83000e080c51c1d585e" exitCode=0 Jan 29 15:48:03 crc kubenswrapper[5008]: I0129 15:48:03.037896 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" event={"ID":"1d24d44a-1e0f-43ea-a065-9c4f369e0045","Type":"ContainerDied","Data":"083f5bd0f3b73b9e5442787b14d42aed7700b0e82373d83000e080c51c1d585e"} Jan 29 15:48:03 crc kubenswrapper[5008]: I0129 15:48:03.041134 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-db-sync-n7wgw" event={"ID":"8277eb2b-44f8-4fd9-af92-1832e0272e0e","Type":"ContainerStarted","Data":"bde50669bd65351b30c48ee0e65fb0911aba9f1d7624eae95461658432ebf883"} Jan 29 15:48:03 crc kubenswrapper[5008]: I0129 15:48:03.084637 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-n7wgw" podStartSLOduration=2.993305539 podStartE2EDuration="1m6.084613s" podCreationTimestamp="2026-01-29 15:46:57 +0000 UTC" firstStartedPulling="2026-01-29 15:46:58.426948702 +0000 UTC m=+1162.099802959" lastFinishedPulling="2026-01-29 15:48:01.518256183 +0000 UTC m=+1225.191110420" observedRunningTime="2026-01-29 15:48:03.079871855 +0000 UTC m=+1226.752726082" watchObservedRunningTime="2026-01-29 15:48:03.084613 +0000 UTC m=+1226.757467267" Jan 29 15:48:04 crc kubenswrapper[5008]: I0129 15:48:04.055918 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" event={"ID":"1d24d44a-1e0f-43ea-a065-9c4f369e0045","Type":"ContainerStarted","Data":"8c955580cc84bdb7c729644dacf0097c59885b458cef63ff2bf7694209b8b51b"} Jan 29 15:48:04 crc kubenswrapper[5008]: I0129 15:48:04.057972 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:48:04 crc kubenswrapper[5008]: I0129 15:48:04.088523 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" podStartSLOduration=26.088492343 podStartE2EDuration="26.088492343s" podCreationTimestamp="2026-01-29 15:47:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:48:04.082430285 +0000 UTC m=+1227.755284552" watchObservedRunningTime="2026-01-29 15:48:04.088492343 +0000 UTC m=+1227.761346650" Jan 29 15:48:07 crc kubenswrapper[5008]: I0129 15:48:07.084475 5008 generic.go:334] "Generic (PLEG): container finished" podID="4a79f96d-ad2b-4b69-b9e9-719b1cc0b183" containerID="eacc0139ac8b112a9da7c9f07cae68774d1d37d4498b8a7bcd2ca73c4e6b805f" exitCode=0 Jan 29 15:48:07 crc kubenswrapper[5008]: I0129 15:48:07.084608 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rdpcb" event={"ID":"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183","Type":"ContainerDied","Data":"eacc0139ac8b112a9da7c9f07cae68774d1d37d4498b8a7bcd2ca73c4e6b805f"} Jan 29 15:48:08 crc kubenswrapper[5008]: I0129 15:48:08.472451 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:48:08 crc kubenswrapper[5008]: I0129 15:48:08.487506 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-rdpcb" Jan 29 15:48:08 crc kubenswrapper[5008]: I0129 15:48:08.531974 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-jlh8x"] Jan 29 15:48:08 crc kubenswrapper[5008]: I0129 15:48:08.532280 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" podUID="536998c7-ad3f-4b4c-ad9e-342343eded97" containerName="dnsmasq-dns" containerID="cri-o://ce100ea2fe5691613542967271b16e95f2aec9ffb301642d42302c1d83db5831" gracePeriod=10 Jan 29 15:48:08 crc kubenswrapper[5008]: I0129 15:48:08.669021 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xlln\" (UniqueName: \"kubernetes.io/projected/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-kube-api-access-6xlln\") pod \"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183\" (UID: \"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183\") " Jan 29 15:48:08 crc kubenswrapper[5008]: I0129 15:48:08.669112 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-combined-ca-bundle\") pod \"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183\" (UID: \"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183\") " Jan 29 15:48:08 crc kubenswrapper[5008]: I0129 15:48:08.669301 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-config-data\") pod \"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183\" (UID: \"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183\") " Jan 29 15:48:08 crc kubenswrapper[5008]: I0129 15:48:08.678762 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-kube-api-access-6xlln" (OuterVolumeSpecName: "kube-api-access-6xlln") pod "4a79f96d-ad2b-4b69-b9e9-719b1cc0b183" (UID: "4a79f96d-ad2b-4b69-b9e9-719b1cc0b183"). InnerVolumeSpecName "kube-api-access-6xlln". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:48:08 crc kubenswrapper[5008]: I0129 15:48:08.701543 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a79f96d-ad2b-4b69-b9e9-719b1cc0b183" (UID: "4a79f96d-ad2b-4b69-b9e9-719b1cc0b183"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:48:08 crc kubenswrapper[5008]: I0129 15:48:08.721615 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-config-data" (OuterVolumeSpecName: "config-data") pod "4a79f96d-ad2b-4b69-b9e9-719b1cc0b183" (UID: "4a79f96d-ad2b-4b69-b9e9-719b1cc0b183"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:48:08 crc kubenswrapper[5008]: I0129 15:48:08.770593 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:08 crc kubenswrapper[5008]: I0129 15:48:08.770626 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xlln\" (UniqueName: \"kubernetes.io/projected/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-kube-api-access-6xlln\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:08 crc kubenswrapper[5008]: I0129 15:48:08.770642 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:08 crc kubenswrapper[5008]: I0129 15:48:08.938054 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.075659 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-config\") pod \"536998c7-ad3f-4b4c-ad9e-342343eded97\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.075773 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-ovsdbserver-nb\") pod \"536998c7-ad3f-4b4c-ad9e-342343eded97\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.075815 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-ovsdbserver-sb\") pod \"536998c7-ad3f-4b4c-ad9e-342343eded97\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.075914 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsqq2\" (UniqueName: \"kubernetes.io/projected/536998c7-ad3f-4b4c-ad9e-342343eded97-kube-api-access-qsqq2\") pod \"536998c7-ad3f-4b4c-ad9e-342343eded97\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.075942 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-dns-svc\") pod \"536998c7-ad3f-4b4c-ad9e-342343eded97\" (UID: \"536998c7-ad3f-4b4c-ad9e-342343eded97\") " Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.087745 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/536998c7-ad3f-4b4c-ad9e-342343eded97-kube-api-access-qsqq2" (OuterVolumeSpecName: "kube-api-access-qsqq2") pod "536998c7-ad3f-4b4c-ad9e-342343eded97" (UID: "536998c7-ad3f-4b4c-ad9e-342343eded97"). InnerVolumeSpecName "kube-api-access-qsqq2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.104052 5008 generic.go:334] "Generic (PLEG): container finished" podID="536998c7-ad3f-4b4c-ad9e-342343eded97" containerID="ce100ea2fe5691613542967271b16e95f2aec9ffb301642d42302c1d83db5831" exitCode=0 Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.104122 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.104158 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" event={"ID":"536998c7-ad3f-4b4c-ad9e-342343eded97","Type":"ContainerDied","Data":"ce100ea2fe5691613542967271b16e95f2aec9ffb301642d42302c1d83db5831"} Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.104213 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-jlh8x" event={"ID":"536998c7-ad3f-4b4c-ad9e-342343eded97","Type":"ContainerDied","Data":"e0537e06f45058060e30f1ea912f4b791f0f50a83a241274268db34f9a3ef7fc"} Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.104231 5008 scope.go:117] "RemoveContainer" containerID="ce100ea2fe5691613542967271b16e95f2aec9ffb301642d42302c1d83db5831" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.106535 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rdpcb" event={"ID":"4a79f96d-ad2b-4b69-b9e9-719b1cc0b183","Type":"ContainerDied","Data":"e523bacd3d00d7c299e8d1ee84b44f3d8235fdd0edd8465f7b1e2360b0719fb8"} Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.106566 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e523bacd3d00d7c299e8d1ee84b44f3d8235fdd0edd8465f7b1e2360b0719fb8" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.106616 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rdpcb" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.116036 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "536998c7-ad3f-4b4c-ad9e-342343eded97" (UID: "536998c7-ad3f-4b4c-ad9e-342343eded97"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.128314 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "536998c7-ad3f-4b4c-ad9e-342343eded97" (UID: "536998c7-ad3f-4b4c-ad9e-342343eded97"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.137546 5008 scope.go:117] "RemoveContainer" containerID="01f240842a9d581bbdd4e45548c395b54d038ece16a8256fdcca28f72896aa94" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.139473 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-config" (OuterVolumeSpecName: "config") pod "536998c7-ad3f-4b4c-ad9e-342343eded97" (UID: "536998c7-ad3f-4b4c-ad9e-342343eded97"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.168534 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "536998c7-ad3f-4b4c-ad9e-342343eded97" (UID: "536998c7-ad3f-4b4c-ad9e-342343eded97"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.178346 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsqq2\" (UniqueName: \"kubernetes.io/projected/536998c7-ad3f-4b4c-ad9e-342343eded97-kube-api-access-qsqq2\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.178400 5008 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.178411 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.178420 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.178431 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/536998c7-ad3f-4b4c-ad9e-342343eded97-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.188313 5008 scope.go:117] "RemoveContainer" containerID="ce100ea2fe5691613542967271b16e95f2aec9ffb301642d42302c1d83db5831" Jan 29 15:48:09 crc kubenswrapper[5008]: E0129 15:48:09.188812 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce100ea2fe5691613542967271b16e95f2aec9ffb301642d42302c1d83db5831\": container with ID starting with ce100ea2fe5691613542967271b16e95f2aec9ffb301642d42302c1d83db5831 not found: ID does not exist" containerID="ce100ea2fe5691613542967271b16e95f2aec9ffb301642d42302c1d83db5831" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.188850 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce100ea2fe5691613542967271b16e95f2aec9ffb301642d42302c1d83db5831"} err="failed to get container status \"ce100ea2fe5691613542967271b16e95f2aec9ffb301642d42302c1d83db5831\": rpc error: code = NotFound desc = could not find container \"ce100ea2fe5691613542967271b16e95f2aec9ffb301642d42302c1d83db5831\": container with ID starting with ce100ea2fe5691613542967271b16e95f2aec9ffb301642d42302c1d83db5831 not found: ID does not exist" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.188875 5008 scope.go:117] "RemoveContainer" containerID="01f240842a9d581bbdd4e45548c395b54d038ece16a8256fdcca28f72896aa94" Jan 29 15:48:09 crc kubenswrapper[5008]: E0129 15:48:09.189185 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01f240842a9d581bbdd4e45548c395b54d038ece16a8256fdcca28f72896aa94\": container with ID starting with 
01f240842a9d581bbdd4e45548c395b54d038ece16a8256fdcca28f72896aa94 not found: ID does not exist" containerID="01f240842a9d581bbdd4e45548c395b54d038ece16a8256fdcca28f72896aa94" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.189219 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01f240842a9d581bbdd4e45548c395b54d038ece16a8256fdcca28f72896aa94"} err="failed to get container status \"01f240842a9d581bbdd4e45548c395b54d038ece16a8256fdcca28f72896aa94\": rpc error: code = NotFound desc = could not find container \"01f240842a9d581bbdd4e45548c395b54d038ece16a8256fdcca28f72896aa94\": container with ID starting with 01f240842a9d581bbdd4e45548c395b54d038ece16a8256fdcca28f72896aa94 not found: ID does not exist" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.334398 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b868669f-l96nk"] Jan 29 15:48:09 crc kubenswrapper[5008]: E0129 15:48:09.334637 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="536998c7-ad3f-4b4c-ad9e-342343eded97" containerName="dnsmasq-dns" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.334649 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="536998c7-ad3f-4b4c-ad9e-342343eded97" containerName="dnsmasq-dns" Jan 29 15:48:09 crc kubenswrapper[5008]: E0129 15:48:09.334660 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="536998c7-ad3f-4b4c-ad9e-342343eded97" containerName="init" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.334666 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="536998c7-ad3f-4b4c-ad9e-342343eded97" containerName="init" Jan 29 15:48:09 crc kubenswrapper[5008]: E0129 15:48:09.334691 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a79f96d-ad2b-4b69-b9e9-719b1cc0b183" containerName="keystone-db-sync" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.334696 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a79f96d-ad2b-4b69-b9e9-719b1cc0b183" containerName="keystone-db-sync" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.334842 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a79f96d-ad2b-4b69-b9e9-719b1cc0b183" containerName="keystone-db-sync" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.334855 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="536998c7-ad3f-4b4c-ad9e-342343eded97" containerName="dnsmasq-dns" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.335600 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.345312 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-b8gfd"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.348774 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.362491 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.362850 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.364539 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.365521 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.367037 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sgcvh" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.376364 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-b8gfd"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.470246 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-l96nk"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.505763 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btnzd\" (UniqueName: \"kubernetes.io/projected/f8408515-bbd2-46aa-b98f-a331b6659aa8-kube-api-access-btnzd\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.505826 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.505866 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-combined-ca-bundle\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.505916 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-fernet-keys\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.505941 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g58mz\" (UniqueName: \"kubernetes.io/projected/32d4f252-93b9-4d91-9501-7fac414b7b47-kube-api-access-g58mz\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.505966 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-scripts\") pod \"keystone-bootstrap-b8gfd\" (UID: 
\"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.505979 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-dns-svc\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.506002 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-config-data\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.506050 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.506077 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.506090 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-credential-keys\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.506104 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-config\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.538252 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-66f4589f77-j49wf"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.539665 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.545402 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-8svhc" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.545585 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.545943 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.546194 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.554045 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-jlh8x"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.568864 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-jlh8x"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.577766 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66f4589f77-j49wf"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.589131 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-fwhd5"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.590146 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.600470 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.601325 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-x6pwm" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.601517 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.607942 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-combined-ca-bundle\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.607997 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-fernet-keys\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.608031 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g58mz\" (UniqueName: \"kubernetes.io/projected/32d4f252-93b9-4d91-9501-7fac414b7b47-kube-api-access-g58mz\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.608056 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-scripts\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " 
pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.608073 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-dns-svc\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.608094 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-config-data\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.608131 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.608155 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-credential-keys\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.608171 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.608200 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-config\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.608222 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btnzd\" (UniqueName: \"kubernetes.io/projected/f8408515-bbd2-46aa-b98f-a331b6659aa8-kube-api-access-btnzd\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.608241 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.609573 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.609732 
5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.610085 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-config\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.613009 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.613683 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-dns-svc\") pod \"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.619841 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-config-data\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.620207 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-credential-keys\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.635548 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-scripts\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.640508 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-fernet-keys\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.642557 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-combined-ca-bundle\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.651397 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g58mz\" (UniqueName: \"kubernetes.io/projected/32d4f252-93b9-4d91-9501-7fac414b7b47-kube-api-access-g58mz\") pod 
\"dnsmasq-dns-5b868669f-l96nk\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.667342 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-fwhd5"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.692452 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btnzd\" (UniqueName: \"kubernetes.io/projected/f8408515-bbd2-46aa-b98f-a331b6659aa8-kube-api-access-btnzd\") pod \"keystone-bootstrap-b8gfd\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.703940 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-l96nk"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.704527 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.709252 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e9e19dd-550a-467d-bd79-03ee07c2f470-config-data\") pod \"horizon-66f4589f77-j49wf\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.709307 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9069f34b-ed91-4ced-8b05-91b83dd02938-etc-machine-id\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.709331 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e9e19dd-550a-467d-bd79-03ee07c2f470-scripts\") pod \"horizon-66f4589f77-j49wf\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.709345 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jtf7\" (UniqueName: \"kubernetes.io/projected/8e9e19dd-550a-467d-bd79-03ee07c2f470-kube-api-access-6jtf7\") pod \"horizon-66f4589f77-j49wf\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.709370 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e9e19dd-550a-467d-bd79-03ee07c2f470-logs\") pod \"horizon-66f4589f77-j49wf\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.709402 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-combined-ca-bundle\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.709448 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-db-sync-config-data\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.709474 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-scripts\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.709493 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8e9e19dd-550a-467d-bd79-03ee07c2f470-horizon-secret-key\") pod \"horizon-66f4589f77-j49wf\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.709515 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b5fh\" (UniqueName: \"kubernetes.io/projected/9069f34b-ed91-4ced-8b05-91b83dd02938-kube-api-access-6b5fh\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.709539 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-config-data\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.731929 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-rcl2z"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.733262 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rcl2z" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.737858 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-wg4h5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.738171 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.771870 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-rcl2z"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.773948 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.788766 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-tqc26"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.790029 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.793403 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.793586 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.793487 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rlqfr" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.817772 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9069f34b-ed91-4ced-8b05-91b83dd02938-etc-machine-id\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.828523 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-f77w7"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.829398 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9069f34b-ed91-4ced-8b05-91b83dd02938-etc-machine-id\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.830166 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.830424 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e9e19dd-550a-467d-bd79-03ee07c2f470-scripts\") pod \"horizon-66f4589f77-j49wf\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.830455 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jtf7\" (UniqueName: \"kubernetes.io/projected/8e9e19dd-550a-467d-bd79-03ee07c2f470-kube-api-access-6jtf7\") pod \"horizon-66f4589f77-j49wf\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.830494 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e9e19dd-550a-467d-bd79-03ee07c2f470-logs\") pod \"horizon-66f4589f77-j49wf\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.830564 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4ec0e696-652d-463e-b97e-dad0065a543b-db-sync-config-data\") pod \"barbican-db-sync-rcl2z\" (UID: \"4ec0e696-652d-463e-b97e-dad0065a543b\") " pod="openstack/barbican-db-sync-rcl2z" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.830590 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-combined-ca-bundle\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " 
pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.830708 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-db-sync-config-data\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.830788 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-scripts\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.831302 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e9e19dd-550a-467d-bd79-03ee07c2f470-logs\") pod \"horizon-66f4589f77-j49wf\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.831609 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8e9e19dd-550a-467d-bd79-03ee07c2f470-horizon-secret-key\") pod \"horizon-66f4589f77-j49wf\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.831650 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b5fh\" (UniqueName: \"kubernetes.io/projected/9069f34b-ed91-4ced-8b05-91b83dd02938-kube-api-access-6b5fh\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.831681 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jftb\" (UniqueName: \"kubernetes.io/projected/4ec0e696-652d-463e-b97e-dad0065a543b-kube-api-access-5jftb\") pod \"barbican-db-sync-rcl2z\" (UID: \"4ec0e696-652d-463e-b97e-dad0065a543b\") " pod="openstack/barbican-db-sync-rcl2z" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.831711 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-config-data\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.831767 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e9e19dd-550a-467d-bd79-03ee07c2f470-config-data\") pod \"horizon-66f4589f77-j49wf\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.831805 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ec0e696-652d-463e-b97e-dad0065a543b-combined-ca-bundle\") pod \"barbican-db-sync-rcl2z\" (UID: \"4ec0e696-652d-463e-b97e-dad0065a543b\") " pod="openstack/barbican-db-sync-rcl2z" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.840207 5008 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-db-sync-config-data\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.841458 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e9e19dd-550a-467d-bd79-03ee07c2f470-scripts\") pod \"horizon-66f4589f77-j49wf\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.850533 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8e9e19dd-550a-467d-bd79-03ee07c2f470-horizon-secret-key\") pod \"horizon-66f4589f77-j49wf\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.854697 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e9e19dd-550a-467d-bd79-03ee07c2f470-config-data\") pod \"horizon-66f4589f77-j49wf\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.856998 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-scripts\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.857511 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-combined-ca-bundle\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.863245 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-f77w7"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.870306 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-config-data\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.880923 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b5fh\" (UniqueName: \"kubernetes.io/projected/9069f34b-ed91-4ced-8b05-91b83dd02938-kube-api-access-6b5fh\") pod \"cinder-db-sync-fwhd5\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.881523 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jtf7\" (UniqueName: \"kubernetes.io/projected/8e9e19dd-550a-467d-bd79-03ee07c2f470-kube-api-access-6jtf7\") pod \"horizon-66f4589f77-j49wf\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.898856 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-tqc26"] Jan 29 
15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.932962 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-4h8lc"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.934008 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4h8lc" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.934224 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jftb\" (UniqueName: \"kubernetes.io/projected/4ec0e696-652d-463e-b97e-dad0065a543b-kube-api-access-5jftb\") pod \"barbican-db-sync-rcl2z\" (UID: \"4ec0e696-652d-463e-b97e-dad0065a543b\") " pod="openstack/barbican-db-sync-rcl2z" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.934358 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-combined-ca-bundle\") pod \"placement-db-sync-tqc26\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.934465 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-config-data\") pod \"placement-db-sync-tqc26\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.934583 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ec0e696-652d-463e-b97e-dad0065a543b-combined-ca-bundle\") pod \"barbican-db-sync-rcl2z\" (UID: \"4ec0e696-652d-463e-b97e-dad0065a543b\") " pod="openstack/barbican-db-sync-rcl2z" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.934670 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-config\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.934742 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-scripts\") pod \"placement-db-sync-tqc26\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.934865 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-logs\") pod \"placement-db-sync-tqc26\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.934950 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4ec0e696-652d-463e-b97e-dad0065a543b-db-sync-config-data\") pod \"barbican-db-sync-rcl2z\" (UID: \"4ec0e696-652d-463e-b97e-dad0065a543b\") " pod="openstack/barbican-db-sync-rcl2z" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.935020 5008 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.935108 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.935207 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hqb9\" (UniqueName: \"kubernetes.io/projected/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-kube-api-access-2hqb9\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.963186 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.963239 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-dns-svc\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.963292 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62fxx\" (UniqueName: \"kubernetes.io/projected/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-kube-api-access-62fxx\") pod \"placement-db-sync-tqc26\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.957023 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4h8lc"] Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.941243 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.954146 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ec0e696-652d-463e-b97e-dad0065a543b-combined-ca-bundle\") pod \"barbican-db-sync-rcl2z\" (UID: \"4ec0e696-652d-463e-b97e-dad0065a543b\") " pod="openstack/barbican-db-sync-rcl2z" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.954424 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4ec0e696-652d-463e-b97e-dad0065a543b-db-sync-config-data\") pod \"barbican-db-sync-rcl2z\" (UID: \"4ec0e696-652d-463e-b97e-dad0065a543b\") " pod="openstack/barbican-db-sync-rcl2z" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.941364 5008 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"neutron-config" Jan 29 15:48:09 crc kubenswrapper[5008]: I0129 15:48:09.941761 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qg4fq" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.001172 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-59d66dd7b7-rjtfk"] Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.003169 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.007847 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.008193 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jftb\" (UniqueName: \"kubernetes.io/projected/4ec0e696-652d-463e-b97e-dad0065a543b-kube-api-access-5jftb\") pod \"barbican-db-sync-rcl2z\" (UID: \"4ec0e696-652d-463e-b97e-dad0065a543b\") " pod="openstack/barbican-db-sync-rcl2z" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.031946 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-59d66dd7b7-rjtfk"] Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.064701 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfh8n\" (UniqueName: \"kubernetes.io/projected/3b110ddf-5eea-4e32-b9f3-f07886d636a2-kube-api-access-zfh8n\") pod \"horizon-59d66dd7b7-rjtfk\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.064760 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-config\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.064802 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-scripts\") pod \"placement-db-sync-tqc26\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.064838 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b110ddf-5eea-4e32-b9f3-f07886d636a2-logs\") pod \"horizon-59d66dd7b7-rjtfk\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.064857 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-logs\") pod \"placement-db-sync-tqc26\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.064895 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmvz6\" (UniqueName: \"kubernetes.io/projected/6c2a1a18-16ff-4419-b233-8649579edbea-kube-api-access-hmvz6\") pod \"neutron-db-sync-4h8lc\" (UID: \"6c2a1a18-16ff-4419-b233-8649579edbea\") " 
pod="openstack/neutron-db-sync-4h8lc" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.064914 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.064933 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.064958 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b110ddf-5eea-4e32-b9f3-f07886d636a2-scripts\") pod \"horizon-59d66dd7b7-rjtfk\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.065014 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hqb9\" (UniqueName: \"kubernetes.io/projected/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-kube-api-access-2hqb9\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.065042 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.065058 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3b110ddf-5eea-4e32-b9f3-f07886d636a2-horizon-secret-key\") pod \"horizon-59d66dd7b7-rjtfk\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.065079 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-dns-svc\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.065101 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b110ddf-5eea-4e32-b9f3-f07886d636a2-config-data\") pod \"horizon-59d66dd7b7-rjtfk\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.065117 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6c2a1a18-16ff-4419-b233-8649579edbea-config\") pod \"neutron-db-sync-4h8lc\" (UID: \"6c2a1a18-16ff-4419-b233-8649579edbea\") " pod="openstack/neutron-db-sync-4h8lc" Jan 29 
15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.065134 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62fxx\" (UniqueName: \"kubernetes.io/projected/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-kube-api-access-62fxx\") pod \"placement-db-sync-tqc26\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.065171 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-combined-ca-bundle\") pod \"placement-db-sync-tqc26\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.065201 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-config-data\") pod \"placement-db-sync-tqc26\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.065224 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2a1a18-16ff-4419-b233-8649579edbea-combined-ca-bundle\") pod \"neutron-db-sync-4h8lc\" (UID: \"6c2a1a18-16ff-4419-b233-8649579edbea\") " pod="openstack/neutron-db-sync-4h8lc" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.065683 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-logs\") pod \"placement-db-sync-tqc26\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.065842 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-config\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.066039 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.066390 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.066399 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.066557 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-dns-svc\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.071139 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-scripts\") pod \"placement-db-sync-tqc26\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.071374 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-combined-ca-bundle\") pod \"placement-db-sync-tqc26\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.077118 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-config-data\") pod \"placement-db-sync-tqc26\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.079021 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.080892 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.087062 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.087908 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.097260 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.097592 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rcl2z" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.105748 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62fxx\" (UniqueName: \"kubernetes.io/projected/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-kube-api-access-62fxx\") pod \"placement-db-sync-tqc26\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.106018 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hqb9\" (UniqueName: \"kubernetes.io/projected/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-kube-api-access-2hqb9\") pod \"dnsmasq-dns-cf78879c9-f77w7\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.153807 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tqc26" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.167528 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.170142 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b110ddf-5eea-4e32-b9f3-f07886d636a2-logs\") pod \"horizon-59d66dd7b7-rjtfk\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.170227 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmvz6\" (UniqueName: \"kubernetes.io/projected/6c2a1a18-16ff-4419-b233-8649579edbea-kube-api-access-hmvz6\") pod \"neutron-db-sync-4h8lc\" (UID: \"6c2a1a18-16ff-4419-b233-8649579edbea\") " pod="openstack/neutron-db-sync-4h8lc" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.170280 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngjqg\" (UniqueName: \"kubernetes.io/projected/8457b44a-814e-403f-a2c9-71907f5cb2d2-kube-api-access-ngjqg\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.170321 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8457b44a-814e-403f-a2c9-71907f5cb2d2-log-httpd\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.170354 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b110ddf-5eea-4e32-b9f3-f07886d636a2-scripts\") pod \"horizon-59d66dd7b7-rjtfk\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.170427 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3b110ddf-5eea-4e32-b9f3-f07886d636a2-horizon-secret-key\") pod \"horizon-59d66dd7b7-rjtfk\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.170454 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-config-data\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.170476 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8457b44a-814e-403f-a2c9-71907f5cb2d2-run-httpd\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.170509 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b110ddf-5eea-4e32-b9f3-f07886d636a2-config-data\") pod \"horizon-59d66dd7b7-rjtfk\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.170540 5008 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6c2a1a18-16ff-4419-b233-8649579edbea-config\") pod \"neutron-db-sync-4h8lc\" (UID: \"6c2a1a18-16ff-4419-b233-8649579edbea\") " pod="openstack/neutron-db-sync-4h8lc" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.170613 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2a1a18-16ff-4419-b233-8649579edbea-combined-ca-bundle\") pod \"neutron-db-sync-4h8lc\" (UID: \"6c2a1a18-16ff-4419-b233-8649579edbea\") " pod="openstack/neutron-db-sync-4h8lc" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.170644 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.170693 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfh8n\" (UniqueName: \"kubernetes.io/projected/3b110ddf-5eea-4e32-b9f3-f07886d636a2-kube-api-access-zfh8n\") pod \"horizon-59d66dd7b7-rjtfk\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.170730 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.170797 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-scripts\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.171624 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b110ddf-5eea-4e32-b9f3-f07886d636a2-scripts\") pod \"horizon-59d66dd7b7-rjtfk\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.173140 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b110ddf-5eea-4e32-b9f3-f07886d636a2-config-data\") pod \"horizon-59d66dd7b7-rjtfk\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.176859 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b110ddf-5eea-4e32-b9f3-f07886d636a2-logs\") pod \"horizon-59d66dd7b7-rjtfk\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.182841 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2a1a18-16ff-4419-b233-8649579edbea-combined-ca-bundle\") pod \"neutron-db-sync-4h8lc\" (UID: 
\"6c2a1a18-16ff-4419-b233-8649579edbea\") " pod="openstack/neutron-db-sync-4h8lc" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.183296 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.204632 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfh8n\" (UniqueName: \"kubernetes.io/projected/3b110ddf-5eea-4e32-b9f3-f07886d636a2-kube-api-access-zfh8n\") pod \"horizon-59d66dd7b7-rjtfk\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.206195 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmvz6\" (UniqueName: \"kubernetes.io/projected/6c2a1a18-16ff-4419-b233-8649579edbea-kube-api-access-hmvz6\") pod \"neutron-db-sync-4h8lc\" (UID: \"6c2a1a18-16ff-4419-b233-8649579edbea\") " pod="openstack/neutron-db-sync-4h8lc" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.207015 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6c2a1a18-16ff-4419-b233-8649579edbea-config\") pod \"neutron-db-sync-4h8lc\" (UID: \"6c2a1a18-16ff-4419-b233-8649579edbea\") " pod="openstack/neutron-db-sync-4h8lc" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.209447 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3b110ddf-5eea-4e32-b9f3-f07886d636a2-horizon-secret-key\") pod \"horizon-59d66dd7b7-rjtfk\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.272613 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-scripts\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.272682 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngjqg\" (UniqueName: \"kubernetes.io/projected/8457b44a-814e-403f-a2c9-71907f5cb2d2-kube-api-access-ngjqg\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.272706 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8457b44a-814e-403f-a2c9-71907f5cb2d2-log-httpd\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.272748 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-config-data\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.272767 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8457b44a-814e-403f-a2c9-71907f5cb2d2-run-httpd\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: 
I0129 15:48:10.272840 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.272873 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.273670 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8457b44a-814e-403f-a2c9-71907f5cb2d2-log-httpd\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.273803 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8457b44a-814e-403f-a2c9-71907f5cb2d2-run-httpd\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.277063 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-scripts\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.282637 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.283749 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.284195 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-config-data\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.288060 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngjqg\" (UniqueName: \"kubernetes.io/projected/8457b44a-814e-403f-a2c9-71907f5cb2d2-kube-api-access-ngjqg\") pod \"ceilometer-0\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.309952 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4h8lc" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.364231 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.420921 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.423514 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-l96nk"] Jan 29 15:48:10 crc kubenswrapper[5008]: W0129 15:48:10.462969 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32d4f252_93b9_4d91_9501_7fac414b7b47.slice/crio-1acb032ed25ef12c73d855be7174e50b33e647d61af0aafcd05a6e8ee53ae527 WatchSource:0}: Error finding container 1acb032ed25ef12c73d855be7174e50b33e647d61af0aafcd05a6e8ee53ae527: Status 404 returned error can't find the container with id 1acb032ed25ef12c73d855be7174e50b33e647d61af0aafcd05a6e8ee53ae527 Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.589544 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-b8gfd"] Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.731208 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-fwhd5"] Jan 29 15:48:10 crc kubenswrapper[5008]: W0129 15:48:10.740815 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9069f34b_ed91_4ced_8b05_91b83dd02938.slice/crio-87157863b5fd88414615bafc24d16f0a62d9f4319c320d4d86a810d58443cfe6 WatchSource:0}: Error finding container 87157863b5fd88414615bafc24d16f0a62d9f4319c320d4d86a810d58443cfe6: Status 404 returned error can't find the container with id 87157863b5fd88414615bafc24d16f0a62d9f4319c320d4d86a810d58443cfe6 Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.837377 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-tqc26"] Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.842466 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-rcl2z"] Jan 29 15:48:10 crc kubenswrapper[5008]: I0129 15:48:10.983339 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4h8lc"] Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.000599 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-f77w7"] Jan 29 15:48:11 crc kubenswrapper[5008]: W0129 15:48:11.002952 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e9e19dd_550a_467d_bd79_03ee07c2f470.slice/crio-1409d01f2c501abf5116a293f455b4ede7359b5dd6f401ad59f4bc1ff5e27560 WatchSource:0}: Error finding container 1409d01f2c501abf5116a293f455b4ede7359b5dd6f401ad59f4bc1ff5e27560: Status 404 returned error can't find the container with id 1409d01f2c501abf5116a293f455b4ede7359b5dd6f401ad59f4bc1ff5e27560 Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.013191 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66f4589f77-j49wf"] Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.144791 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-f77w7" event={"ID":"771d4fdc-7731-4bfc-a65a-7c3b8624eb32","Type":"ContainerStarted","Data":"0855c1b3124d74f066ce8585049d7c108a1ae142bfe48dd2fe48b76c9a87b4b0"} Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.146945 5008 generic.go:334] "Generic (PLEG): 
container finished" podID="32d4f252-93b9-4d91-9501-7fac414b7b47" containerID="162e7c392841dddbcd1aa2020766cf167422ce4a22d288e65690e63fcf74ed9c" exitCode=0 Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.147112 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-l96nk" event={"ID":"32d4f252-93b9-4d91-9501-7fac414b7b47","Type":"ContainerDied","Data":"162e7c392841dddbcd1aa2020766cf167422ce4a22d288e65690e63fcf74ed9c"} Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.147205 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-l96nk" event={"ID":"32d4f252-93b9-4d91-9501-7fac414b7b47","Type":"ContainerStarted","Data":"1acb032ed25ef12c73d855be7174e50b33e647d61af0aafcd05a6e8ee53ae527"} Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.151535 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b8gfd" event={"ID":"f8408515-bbd2-46aa-b98f-a331b6659aa8","Type":"ContainerStarted","Data":"82015428914e1b8d83489174480b3a04643dbd25b377d65c00407eb4dfbc5a91"} Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.151647 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b8gfd" event={"ID":"f8408515-bbd2-46aa-b98f-a331b6659aa8","Type":"ContainerStarted","Data":"405bad21fefa05b3e90ec899e50725ce7823c20297242fc88f79da9c15e44ffd"} Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.153291 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-fwhd5" event={"ID":"9069f34b-ed91-4ced-8b05-91b83dd02938","Type":"ContainerStarted","Data":"87157863b5fd88414615bafc24d16f0a62d9f4319c320d4d86a810d58443cfe6"} Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.154172 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tqc26" event={"ID":"c3a233d5-bf7f-4906-881c-5e81ea64e0e8","Type":"ContainerStarted","Data":"7463a1c0c912427b5643e45ef8f082d31f897a9969a145430140c8f0d851f2fa"} Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.155270 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66f4589f77-j49wf" event={"ID":"8e9e19dd-550a-467d-bd79-03ee07c2f470","Type":"ContainerStarted","Data":"1409d01f2c501abf5116a293f455b4ede7359b5dd6f401ad59f4bc1ff5e27560"} Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.158020 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4h8lc" event={"ID":"6c2a1a18-16ff-4419-b233-8649579edbea","Type":"ContainerStarted","Data":"07e336009f3d0d4bad7a27492f349aabeb9348d525d8a5111ca33499deca9afe"} Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.159414 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rcl2z" event={"ID":"4ec0e696-652d-463e-b97e-dad0065a543b","Type":"ContainerStarted","Data":"748398d1ff4ce764be647594fea290f65e925f9a2636d8aeb85a205a07c6aff2"} Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.178155 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.194214 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-b8gfd" podStartSLOduration=2.194175825 podStartE2EDuration="2.194175825s" podCreationTimestamp="2026-01-29 15:48:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:48:11.193483138 +0000 
UTC m=+1234.866337375" watchObservedRunningTime="2026-01-29 15:48:11.194175825 +0000 UTC m=+1234.867030062" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.284047 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-59d66dd7b7-rjtfk"] Jan 29 15:48:11 crc kubenswrapper[5008]: W0129 15:48:11.286940 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b110ddf_5eea_4e32_b9f3_f07886d636a2.slice/crio-edb6a5e3eecc88a8d2bbfb0fdbece87ea6a4b28d555c22d32a7db25bc8e06e84 WatchSource:0}: Error finding container edb6a5e3eecc88a8d2bbfb0fdbece87ea6a4b28d555c22d32a7db25bc8e06e84: Status 404 returned error can't find the container with id edb6a5e3eecc88a8d2bbfb0fdbece87ea6a4b28d555c22d32a7db25bc8e06e84 Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.359894 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="536998c7-ad3f-4b4c-ad9e-342343eded97" path="/var/lib/kubelet/pods/536998c7-ad3f-4b4c-ad9e-342343eded97/volumes" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.517882 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.598231 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-dns-swift-storage-0\") pod \"32d4f252-93b9-4d91-9501-7fac414b7b47\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.598336 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-dns-svc\") pod \"32d4f252-93b9-4d91-9501-7fac414b7b47\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.598385 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-config\") pod \"32d4f252-93b9-4d91-9501-7fac414b7b47\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.598458 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-ovsdbserver-sb\") pod \"32d4f252-93b9-4d91-9501-7fac414b7b47\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.598514 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-ovsdbserver-nb\") pod \"32d4f252-93b9-4d91-9501-7fac414b7b47\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.598540 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g58mz\" (UniqueName: \"kubernetes.io/projected/32d4f252-93b9-4d91-9501-7fac414b7b47-kube-api-access-g58mz\") pod \"32d4f252-93b9-4d91-9501-7fac414b7b47\" (UID: \"32d4f252-93b9-4d91-9501-7fac414b7b47\") " Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.609608 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/32d4f252-93b9-4d91-9501-7fac414b7b47-kube-api-access-g58mz" (OuterVolumeSpecName: "kube-api-access-g58mz") pod "32d4f252-93b9-4d91-9501-7fac414b7b47" (UID: "32d4f252-93b9-4d91-9501-7fac414b7b47"). InnerVolumeSpecName "kube-api-access-g58mz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.620314 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "32d4f252-93b9-4d91-9501-7fac414b7b47" (UID: "32d4f252-93b9-4d91-9501-7fac414b7b47"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.626177 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-config" (OuterVolumeSpecName: "config") pod "32d4f252-93b9-4d91-9501-7fac414b7b47" (UID: "32d4f252-93b9-4d91-9501-7fac414b7b47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.642723 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "32d4f252-93b9-4d91-9501-7fac414b7b47" (UID: "32d4f252-93b9-4d91-9501-7fac414b7b47"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.646150 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "32d4f252-93b9-4d91-9501-7fac414b7b47" (UID: "32d4f252-93b9-4d91-9501-7fac414b7b47"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.655627 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "32d4f252-93b9-4d91-9501-7fac414b7b47" (UID: "32d4f252-93b9-4d91-9501-7fac414b7b47"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.700404 5008 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.700447 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.700464 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.700476 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.700487 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g58mz\" (UniqueName: \"kubernetes.io/projected/32d4f252-93b9-4d91-9501-7fac414b7b47-kube-api-access-g58mz\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.700499 5008 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32d4f252-93b9-4d91-9501-7fac414b7b47-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.947037 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-59d66dd7b7-rjtfk"] Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.990007 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-65975bb757-q7xqt"] Jan 29 15:48:11 crc kubenswrapper[5008]: E0129 15:48:11.990370 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32d4f252-93b9-4d91-9501-7fac414b7b47" containerName="init" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.990383 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="32d4f252-93b9-4d91-9501-7fac414b7b47" containerName="init" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.990534 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="32d4f252-93b9-4d91-9501-7fac414b7b47" containerName="init" Jan 29 15:48:11 crc kubenswrapper[5008]: I0129 15:48:11.991662 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.013233 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-65975bb757-q7xqt"] Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.013466 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnq52\" (UniqueName: \"kubernetes.io/projected/5f86a518-6363-4796-a4f4-7208aacccc99-kube-api-access-xnq52\") pod \"horizon-65975bb757-q7xqt\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.013563 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f86a518-6363-4796-a4f4-7208aacccc99-config-data\") pod \"horizon-65975bb757-q7xqt\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.013631 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5f86a518-6363-4796-a4f4-7208aacccc99-horizon-secret-key\") pod \"horizon-65975bb757-q7xqt\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.013728 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f86a518-6363-4796-a4f4-7208aacccc99-logs\") pod \"horizon-65975bb757-q7xqt\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.014066 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f86a518-6363-4796-a4f4-7208aacccc99-scripts\") pod \"horizon-65975bb757-q7xqt\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.055224 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.117062 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f86a518-6363-4796-a4f4-7208aacccc99-scripts\") pod \"horizon-65975bb757-q7xqt\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.117122 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnq52\" (UniqueName: \"kubernetes.io/projected/5f86a518-6363-4796-a4f4-7208aacccc99-kube-api-access-xnq52\") pod \"horizon-65975bb757-q7xqt\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.117163 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f86a518-6363-4796-a4f4-7208aacccc99-config-data\") pod \"horizon-65975bb757-q7xqt\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.117188 
5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5f86a518-6363-4796-a4f4-7208aacccc99-horizon-secret-key\") pod \"horizon-65975bb757-q7xqt\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.117221 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f86a518-6363-4796-a4f4-7208aacccc99-logs\") pod \"horizon-65975bb757-q7xqt\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.117673 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f86a518-6363-4796-a4f4-7208aacccc99-logs\") pod \"horizon-65975bb757-q7xqt\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.118501 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f86a518-6363-4796-a4f4-7208aacccc99-scripts\") pod \"horizon-65975bb757-q7xqt\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.120027 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f86a518-6363-4796-a4f4-7208aacccc99-config-data\") pod \"horizon-65975bb757-q7xqt\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.124259 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5f86a518-6363-4796-a4f4-7208aacccc99-horizon-secret-key\") pod \"horizon-65975bb757-q7xqt\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.142045 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnq52\" (UniqueName: \"kubernetes.io/projected/5f86a518-6363-4796-a4f4-7208aacccc99-kube-api-access-xnq52\") pod \"horizon-65975bb757-q7xqt\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.176716 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4h8lc" event={"ID":"6c2a1a18-16ff-4419-b233-8649579edbea","Type":"ContainerStarted","Data":"ea56cb31969ede4dc77690e8380474b589122f4e8ba458f2575d15b6351054fb"} Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.178757 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8457b44a-814e-403f-a2c9-71907f5cb2d2","Type":"ContainerStarted","Data":"c97bf01c6b949d39e9bc8fa902a0c1cf304eedee9dbe4194b2055c35de3ec4ce"} Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.181262 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59d66dd7b7-rjtfk" event={"ID":"3b110ddf-5eea-4e32-b9f3-f07886d636a2","Type":"ContainerStarted","Data":"edb6a5e3eecc88a8d2bbfb0fdbece87ea6a4b28d555c22d32a7db25bc8e06e84"} Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.183145 5008 generic.go:334] "Generic (PLEG): 
container finished" podID="771d4fdc-7731-4bfc-a65a-7c3b8624eb32" containerID="3fec96d0d9b6bf3046f7029a3dc91f246cf551ca6e017f8896e18866aed96699" exitCode=0 Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.183261 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-f77w7" event={"ID":"771d4fdc-7731-4bfc-a65a-7c3b8624eb32","Type":"ContainerDied","Data":"3fec96d0d9b6bf3046f7029a3dc91f246cf551ca6e017f8896e18866aed96699"} Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.193078 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-l96nk" event={"ID":"32d4f252-93b9-4d91-9501-7fac414b7b47","Type":"ContainerDied","Data":"1acb032ed25ef12c73d855be7174e50b33e647d61af0aafcd05a6e8ee53ae527"} Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.193158 5008 scope.go:117] "RemoveContainer" containerID="162e7c392841dddbcd1aa2020766cf167422ce4a22d288e65690e63fcf74ed9c" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.193381 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-l96nk" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.206746 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-4h8lc" podStartSLOduration=3.206717958 podStartE2EDuration="3.206717958s" podCreationTimestamp="2026-01-29 15:48:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:48:12.195800173 +0000 UTC m=+1235.868654420" watchObservedRunningTime="2026-01-29 15:48:12.206717958 +0000 UTC m=+1235.879572195" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.289766 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-l96nk"] Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.292841 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-l96nk"] Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.339388 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:48:12 crc kubenswrapper[5008]: I0129 15:48:12.904803 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-65975bb757-q7xqt"] Jan 29 15:48:13 crc kubenswrapper[5008]: I0129 15:48:13.210368 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65975bb757-q7xqt" event={"ID":"5f86a518-6363-4796-a4f4-7208aacccc99","Type":"ContainerStarted","Data":"115aa46cd8290b427be260b9a17520dfb8392c574f35bae7cb7c624f65477597"} Jan 29 15:48:13 crc kubenswrapper[5008]: I0129 15:48:13.214853 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-f77w7" event={"ID":"771d4fdc-7731-4bfc-a65a-7c3b8624eb32","Type":"ContainerStarted","Data":"7c2adc3a463437940f2209966bd51450818f3254391e12503b2d25eac2fb47ae"} Jan 29 15:48:13 crc kubenswrapper[5008]: I0129 15:48:13.214954 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:13 crc kubenswrapper[5008]: I0129 15:48:13.237008 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cf78879c9-f77w7" podStartSLOduration=4.236991291 podStartE2EDuration="4.236991291s" podCreationTimestamp="2026-01-29 15:48:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:48:13.231673432 +0000 UTC m=+1236.904527669" watchObservedRunningTime="2026-01-29 15:48:13.236991291 +0000 UTC m=+1236.909845518" Jan 29 15:48:13 crc kubenswrapper[5008]: I0129 15:48:13.336518 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32d4f252-93b9-4d91-9501-7fac414b7b47" path="/var/lib/kubelet/pods/32d4f252-93b9-4d91-9501-7fac414b7b47/volumes" Jan 29 15:48:13 crc kubenswrapper[5008]: I0129 15:48:13.990896 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:48:13 crc kubenswrapper[5008]: I0129 15:48:13.990955 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.776363 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66f4589f77-j49wf"] Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.811543 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7f49b8c48b-x77zl"] Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.813275 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.815630 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.829348 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7f49b8c48b-x77zl"] Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.845754 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-logs\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.845830 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxxxg\" (UniqueName: \"kubernetes.io/projected/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-kube-api-access-vxxxg\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.845867 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-config-data\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.845931 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-horizon-secret-key\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.845958 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-scripts\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.846032 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-horizon-tls-certs\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.846056 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-combined-ca-bundle\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.881927 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-65975bb757-q7xqt"] Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.912121 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-bf5f5fc4b-t9vk7"] Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.933892 
5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.948077 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-horizon-secret-key\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.948121 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-scripts\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.948188 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-horizon-tls-certs\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.948214 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-combined-ca-bundle\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.948263 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-logs\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.948309 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxxxg\" (UniqueName: \"kubernetes.io/projected/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-kube-api-access-vxxxg\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.948352 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-config-data\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.950131 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-config-data\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.951270 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-logs\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.955346 
5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-scripts\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.958112 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-horizon-tls-certs\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.958966 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-horizon-secret-key\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.971464 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-combined-ca-bundle\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.975056 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-bf5f5fc4b-t9vk7"] Jan 29 15:48:18 crc kubenswrapper[5008]: I0129 15:48:18.975140 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxxxg\" (UniqueName: \"kubernetes.io/projected/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-kube-api-access-vxxxg\") pod \"horizon-7f49b8c48b-x77zl\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.050435 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fc599e48-62d0-4908-b4ed-cd3f13094665-scripts\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.050735 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc599e48-62d0-4908-b4ed-cd3f13094665-horizon-tls-certs\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.050888 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc599e48-62d0-4908-b4ed-cd3f13094665-logs\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.051023 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fc599e48-62d0-4908-b4ed-cd3f13094665-horizon-secret-key\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 
15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.051118 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc599e48-62d0-4908-b4ed-cd3f13094665-combined-ca-bundle\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.051755 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jc45\" (UniqueName: \"kubernetes.io/projected/fc599e48-62d0-4908-b4ed-cd3f13094665-kube-api-access-6jc45\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.051930 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fc599e48-62d0-4908-b4ed-cd3f13094665-config-data\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.134257 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.153843 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fc599e48-62d0-4908-b4ed-cd3f13094665-horizon-secret-key\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.154224 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc599e48-62d0-4908-b4ed-cd3f13094665-combined-ca-bundle\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.154478 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jc45\" (UniqueName: \"kubernetes.io/projected/fc599e48-62d0-4908-b4ed-cd3f13094665-kube-api-access-6jc45\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.154684 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fc599e48-62d0-4908-b4ed-cd3f13094665-config-data\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.154985 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fc599e48-62d0-4908-b4ed-cd3f13094665-scripts\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.155209 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc599e48-62d0-4908-b4ed-cd3f13094665-horizon-tls-certs\") pod 
\"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.155409 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc599e48-62d0-4908-b4ed-cd3f13094665-logs\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.156159 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc599e48-62d0-4908-b4ed-cd3f13094665-logs\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.156209 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fc599e48-62d0-4908-b4ed-cd3f13094665-config-data\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.157583 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fc599e48-62d0-4908-b4ed-cd3f13094665-scripts\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.158776 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc599e48-62d0-4908-b4ed-cd3f13094665-horizon-tls-certs\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.159110 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fc599e48-62d0-4908-b4ed-cd3f13094665-horizon-secret-key\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.159577 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc599e48-62d0-4908-b4ed-cd3f13094665-combined-ca-bundle\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.170623 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jc45\" (UniqueName: \"kubernetes.io/projected/fc599e48-62d0-4908-b4ed-cd3f13094665-kube-api-access-6jc45\") pod \"horizon-bf5f5fc4b-t9vk7\" (UID: \"fc599e48-62d0-4908-b4ed-cd3f13094665\") " pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:19 crc kubenswrapper[5008]: I0129 15:48:19.258421 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:48:20 crc kubenswrapper[5008]: I0129 15:48:20.170085 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:48:20 crc kubenswrapper[5008]: I0129 15:48:20.258729 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-k22kg"] Jan 29 15:48:20 crc kubenswrapper[5008]: I0129 15:48:20.259029 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="dnsmasq-dns" containerID="cri-o://8c955580cc84bdb7c729644dacf0097c59885b458cef63ff2bf7694209b8b51b" gracePeriod=10 Jan 29 15:48:21 crc kubenswrapper[5008]: I0129 15:48:21.292082 5008 generic.go:334] "Generic (PLEG): container finished" podID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerID="8c955580cc84bdb7c729644dacf0097c59885b458cef63ff2bf7694209b8b51b" exitCode=0 Jan 29 15:48:21 crc kubenswrapper[5008]: I0129 15:48:21.292150 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" event={"ID":"1d24d44a-1e0f-43ea-a065-9c4f369e0045","Type":"ContainerDied","Data":"8c955580cc84bdb7c729644dacf0097c59885b458cef63ff2bf7694209b8b51b"} Jan 29 15:48:23 crc kubenswrapper[5008]: I0129 15:48:23.457278 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: connect: connection refused" Jan 29 15:48:28 crc kubenswrapper[5008]: I0129 15:48:28.458299 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: connect: connection refused" Jan 29 15:48:33 crc kubenswrapper[5008]: I0129 15:48:33.465084 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: connect: connection refused" Jan 29 15:48:33 crc kubenswrapper[5008]: I0129 15:48:33.465952 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:48:35 crc kubenswrapper[5008]: E0129 15:48:35.976265 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 29 15:48:35 crc kubenswrapper[5008]: E0129 15:48:35.976970 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5jftb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-rcl2z_openstack(4ec0e696-652d-463e-b97e-dad0065a543b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:48:35 crc kubenswrapper[5008]: E0129 15:48:35.978134 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-rcl2z" podUID="4ec0e696-652d-463e-b97e-dad0065a543b" Jan 29 15:48:36 crc kubenswrapper[5008]: E0129 15:48:36.485528 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-rcl2z" podUID="4ec0e696-652d-463e-b97e-dad0065a543b" Jan 29 15:48:43 crc kubenswrapper[5008]: I0129 15:48:43.457392 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Jan 29 15:48:43 crc kubenswrapper[5008]: I0129 15:48:43.990758 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:48:43 crc kubenswrapper[5008]: I0129 15:48:43.990845 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:48:43 crc kubenswrapper[5008]: I0129 15:48:43.990896 5008 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:48:43 crc kubenswrapper[5008]: I0129 15:48:43.991625 5008 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"afcf72806e2f44481eaccbb425ccc0452067f0e28ee8224a454fe6d6fab03a1b"} pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:48:43 crc kubenswrapper[5008]: I0129 15:48:43.992458 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" containerID="cri-o://afcf72806e2f44481eaccbb425ccc0452067f0e28ee8224a454fe6d6fab03a1b" gracePeriod=600 Jan 29 15:48:46 crc kubenswrapper[5008]: I0129 15:48:46.574462 5008 generic.go:334] "Generic (PLEG): container finished" podID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerID="afcf72806e2f44481eaccbb425ccc0452067f0e28ee8224a454fe6d6fab03a1b" exitCode=0 Jan 29 15:48:46 crc kubenswrapper[5008]: I0129 15:48:46.574550 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerDied","Data":"afcf72806e2f44481eaccbb425ccc0452067f0e28ee8224a454fe6d6fab03a1b"} Jan 29 15:48:46 crc kubenswrapper[5008]: I0129 15:48:46.575274 5008 scope.go:117] "RemoveContainer" containerID="f87de1e980db0bd16d914932ff79d49ee9898f73c25f93235e4e1fda574d4c5a" Jan 29 15:48:48 crc kubenswrapper[5008]: I0129 15:48:48.459164 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Jan 29 15:48:51 crc kubenswrapper[5008]: E0129 15:48:51.730378 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 29 15:48:51 crc kubenswrapper[5008]: E0129 15:48:51.730933 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n695h68fh68fh64bh65bhc8h5fhfch668hfdh64fh66bh8dh64h674hdbh697h544h57ch59ch554h595h575h664h64bh689h549h5bdh65h6h5c8h88q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jtf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-66f4589f77-j49wf_openstack(8e9e19dd-550a-467d-bd79-03ee07c2f470): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:48:51 crc kubenswrapper[5008]: E0129 15:48:51.733140 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-66f4589f77-j49wf" podUID="8e9e19dd-550a-467d-bd79-03ee07c2f470" Jan 29 15:48:53 crc kubenswrapper[5008]: I0129 15:48:53.460260 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Jan 29 15:48:58 crc kubenswrapper[5008]: I0129 15:48:58.461858 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Jan 29 15:49:03 crc kubenswrapper[5008]: I0129 15:49:03.462633 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Jan 29 15:49:04 crc kubenswrapper[5008]: E0129 15:49:04.290997 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc 
= copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 29 15:49:04 crc kubenswrapper[5008]: E0129 15:49:04.291404 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5cch654h59h55fh685h76hdh5d6hb8h5c9hbh645hfh54h695hcch67h696h5f6h5d8h5dbh585h5h576h644hc9h5f9h5cbh5b6h6h597h55bq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zfh8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-59d66dd7b7-rjtfk_openstack(3b110ddf-5eea-4e32-b9f3-f07886d636a2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:49:04 crc kubenswrapper[5008]: E0129 15:49:04.295773 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-59d66dd7b7-rjtfk" podUID="3b110ddf-5eea-4e32-b9f3-f07886d636a2" Jan 29 15:49:08 crc kubenswrapper[5008]: I0129 15:49:08.463750 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Jan 29 15:49:13 crc kubenswrapper[5008]: I0129 15:49:13.465632 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Jan 29 15:49:18 crc kubenswrapper[5008]: I0129 15:49:18.467244 5008 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Jan 29 15:49:20 crc kubenswrapper[5008]: E0129 15:49:20.800306 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 29 15:49:20 crc kubenswrapper[5008]: E0129 15:49:20.800874 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nbbh89h654h697h589h59fh65ch559h9bh676h5c5h55bhcbh555h55h698h66hc9h646h59bh5c9h557h659h669hf6h54h676h57h665h5dfhb4hd8q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xnq52,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-65975bb757-q7xqt_openstack(5f86a518-6363-4796-a4f4-7208aacccc99): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:49:20 crc kubenswrapper[5008]: E0129 15:49:20.803639 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-65975bb757-q7xqt" podUID="5f86a518-6363-4796-a4f4-7208aacccc99" Jan 29 15:49:20 crc kubenswrapper[5008]: I0129 15:49:20.917832 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:49:20 crc kubenswrapper[5008]: I0129 15:49:20.932350 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsqfw\" (UniqueName: \"kubernetes.io/projected/1d24d44a-1e0f-43ea-a065-9c4f369e0045-kube-api-access-zsqfw\") pod \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " Jan 29 15:49:20 crc kubenswrapper[5008]: I0129 15:49:20.932582 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-ovsdbserver-nb\") pod \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " Jan 29 15:49:20 crc kubenswrapper[5008]: I0129 15:49:20.932651 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-dns-svc\") pod \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " Jan 29 15:49:20 crc kubenswrapper[5008]: I0129 15:49:20.932723 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-ovsdbserver-sb\") pod \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " Jan 29 15:49:20 crc kubenswrapper[5008]: I0129 15:49:20.933077 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-dns-swift-storage-0\") pod \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " Jan 29 15:49:20 crc kubenswrapper[5008]: I0129 15:49:20.933174 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-config\") pod \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\" (UID: \"1d24d44a-1e0f-43ea-a065-9c4f369e0045\") " Jan 29 15:49:20 crc kubenswrapper[5008]: I0129 15:49:20.942062 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d24d44a-1e0f-43ea-a065-9c4f369e0045-kube-api-access-zsqfw" (OuterVolumeSpecName: "kube-api-access-zsqfw") pod "1d24d44a-1e0f-43ea-a065-9c4f369e0045" (UID: "1d24d44a-1e0f-43ea-a065-9c4f369e0045"). InnerVolumeSpecName "kube-api-access-zsqfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:49:20 crc kubenswrapper[5008]: I0129 15:49:20.964874 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" event={"ID":"1d24d44a-1e0f-43ea-a065-9c4f369e0045","Type":"ContainerDied","Data":"ce4f811545cec808190704383cf9c2a75b48fb0966a323612a8e888c6a8f70bd"} Jan 29 15:49:20 crc kubenswrapper[5008]: I0129 15:49:20.964900 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.013794 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1d24d44a-1e0f-43ea-a065-9c4f369e0045" (UID: "1d24d44a-1e0f-43ea-a065-9c4f369e0045"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.018353 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1d24d44a-1e0f-43ea-a065-9c4f369e0045" (UID: "1d24d44a-1e0f-43ea-a065-9c4f369e0045"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.029370 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1d24d44a-1e0f-43ea-a065-9c4f369e0045" (UID: "1d24d44a-1e0f-43ea-a065-9c4f369e0045"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.040264 5008 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.040289 5008 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.040303 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsqfw\" (UniqueName: \"kubernetes.io/projected/1d24d44a-1e0f-43ea-a065-9c4f369e0045-kube-api-access-zsqfw\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.040312 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.043909 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1d24d44a-1e0f-43ea-a065-9c4f369e0045" (UID: "1d24d44a-1e0f-43ea-a065-9c4f369e0045"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.061323 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-config" (OuterVolumeSpecName: "config") pod "1d24d44a-1e0f-43ea-a065-9c4f369e0045" (UID: "1d24d44a-1e0f-43ea-a065-9c4f369e0045"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.142349 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.142385 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d24d44a-1e0f-43ea-a065-9c4f369e0045-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.237102 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7f49b8c48b-x77zl"] Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.297561 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-k22kg"] Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.303502 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-k22kg"] Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.343012 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" path="/var/lib/kubelet/pods/1d24d44a-1e0f-43ea-a065-9c4f369e0045/volumes" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.465201 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.473321 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:49:21 crc kubenswrapper[5008]: E0129 15:49:21.507689 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 29 15:49:21 crc kubenswrapper[5008]: E0129 15:49:21.507932 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68fh666h94h85h96h57fh59fh588h5fdh647h66chbbh67ch6ch5dch68ch677h5d8h599h5fbh64ch5b7h68fhfbhbh58fh556h67dh5f6h5c8hc9h5b5q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ngjqg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(8457b44a-814e-403f-a2c9-71907f5cb2d2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.552941 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e9e19dd-550a-467d-bd79-03ee07c2f470-logs\") pod \"8e9e19dd-550a-467d-bd79-03ee07c2f470\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.552989 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jtf7\" (UniqueName: \"kubernetes.io/projected/8e9e19dd-550a-467d-bd79-03ee07c2f470-kube-api-access-6jtf7\") pod \"8e9e19dd-550a-467d-bd79-03ee07c2f470\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.553054 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8e9e19dd-550a-467d-bd79-03ee07c2f470-horizon-secret-key\") pod \"8e9e19dd-550a-467d-bd79-03ee07c2f470\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.553100 5008 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3b110ddf-5eea-4e32-b9f3-f07886d636a2-horizon-secret-key\") pod \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.553211 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e9e19dd-550a-467d-bd79-03ee07c2f470-scripts\") pod \"8e9e19dd-550a-467d-bd79-03ee07c2f470\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.553252 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfh8n\" (UniqueName: \"kubernetes.io/projected/3b110ddf-5eea-4e32-b9f3-f07886d636a2-kube-api-access-zfh8n\") pod \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.553285 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b110ddf-5eea-4e32-b9f3-f07886d636a2-scripts\") pod \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.553309 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b110ddf-5eea-4e32-b9f3-f07886d636a2-logs\") pod \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.553361 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e9e19dd-550a-467d-bd79-03ee07c2f470-config-data\") pod \"8e9e19dd-550a-467d-bd79-03ee07c2f470\" (UID: \"8e9e19dd-550a-467d-bd79-03ee07c2f470\") " Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.553497 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b110ddf-5eea-4e32-b9f3-f07886d636a2-config-data\") pod \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\" (UID: \"3b110ddf-5eea-4e32-b9f3-f07886d636a2\") " Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.553545 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e9e19dd-550a-467d-bd79-03ee07c2f470-logs" (OuterVolumeSpecName: "logs") pod "8e9e19dd-550a-467d-bd79-03ee07c2f470" (UID: "8e9e19dd-550a-467d-bd79-03ee07c2f470"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.553971 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e9e19dd-550a-467d-bd79-03ee07c2f470-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.555796 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b110ddf-5eea-4e32-b9f3-f07886d636a2-config-data" (OuterVolumeSpecName: "config-data") pod "3b110ddf-5eea-4e32-b9f3-f07886d636a2" (UID: "3b110ddf-5eea-4e32-b9f3-f07886d636a2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.556542 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b110ddf-5eea-4e32-b9f3-f07886d636a2-logs" (OuterVolumeSpecName: "logs") pod "3b110ddf-5eea-4e32-b9f3-f07886d636a2" (UID: "3b110ddf-5eea-4e32-b9f3-f07886d636a2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.556696 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e9e19dd-550a-467d-bd79-03ee07c2f470-scripts" (OuterVolumeSpecName: "scripts") pod "8e9e19dd-550a-467d-bd79-03ee07c2f470" (UID: "8e9e19dd-550a-467d-bd79-03ee07c2f470"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.557267 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b110ddf-5eea-4e32-b9f3-f07886d636a2-scripts" (OuterVolumeSpecName: "scripts") pod "3b110ddf-5eea-4e32-b9f3-f07886d636a2" (UID: "3b110ddf-5eea-4e32-b9f3-f07886d636a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.557464 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e9e19dd-550a-467d-bd79-03ee07c2f470-config-data" (OuterVolumeSpecName: "config-data") pod "8e9e19dd-550a-467d-bd79-03ee07c2f470" (UID: "8e9e19dd-550a-467d-bd79-03ee07c2f470"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.559318 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b110ddf-5eea-4e32-b9f3-f07886d636a2-kube-api-access-zfh8n" (OuterVolumeSpecName: "kube-api-access-zfh8n") pod "3b110ddf-5eea-4e32-b9f3-f07886d636a2" (UID: "3b110ddf-5eea-4e32-b9f3-f07886d636a2"). InnerVolumeSpecName "kube-api-access-zfh8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.567012 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e9e19dd-550a-467d-bd79-03ee07c2f470-kube-api-access-6jtf7" (OuterVolumeSpecName: "kube-api-access-6jtf7") pod "8e9e19dd-550a-467d-bd79-03ee07c2f470" (UID: "8e9e19dd-550a-467d-bd79-03ee07c2f470"). InnerVolumeSpecName "kube-api-access-6jtf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.578679 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e9e19dd-550a-467d-bd79-03ee07c2f470-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "8e9e19dd-550a-467d-bd79-03ee07c2f470" (UID: "8e9e19dd-550a-467d-bd79-03ee07c2f470"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.581284 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b110ddf-5eea-4e32-b9f3-f07886d636a2-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "3b110ddf-5eea-4e32-b9f3-f07886d636a2" (UID: "3b110ddf-5eea-4e32-b9f3-f07886d636a2"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.655453 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b110ddf-5eea-4e32-b9f3-f07886d636a2-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.655487 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e9e19dd-550a-467d-bd79-03ee07c2f470-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.655502 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b110ddf-5eea-4e32-b9f3-f07886d636a2-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.655515 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jtf7\" (UniqueName: \"kubernetes.io/projected/8e9e19dd-550a-467d-bd79-03ee07c2f470-kube-api-access-6jtf7\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.655531 5008 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8e9e19dd-550a-467d-bd79-03ee07c2f470-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.655542 5008 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3b110ddf-5eea-4e32-b9f3-f07886d636a2-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.655553 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e9e19dd-550a-467d-bd79-03ee07c2f470-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.655565 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfh8n\" (UniqueName: \"kubernetes.io/projected/3b110ddf-5eea-4e32-b9f3-f07886d636a2-kube-api-access-zfh8n\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.655576 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b110ddf-5eea-4e32-b9f3-f07886d636a2-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.976583 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59d66dd7b7-rjtfk" event={"ID":"3b110ddf-5eea-4e32-b9f3-f07886d636a2","Type":"ContainerDied","Data":"edb6a5e3eecc88a8d2bbfb0fdbece87ea6a4b28d555c22d32a7db25bc8e06e84"} Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.976655 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-59d66dd7b7-rjtfk" Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.981266 5008 generic.go:334] "Generic (PLEG): container finished" podID="f8408515-bbd2-46aa-b98f-a331b6659aa8" containerID="82015428914e1b8d83489174480b3a04643dbd25b377d65c00407eb4dfbc5a91" exitCode=0 Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.981407 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b8gfd" event={"ID":"f8408515-bbd2-46aa-b98f-a331b6659aa8","Type":"ContainerDied","Data":"82015428914e1b8d83489174480b3a04643dbd25b377d65c00407eb4dfbc5a91"} Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.988160 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66f4589f77-j49wf" event={"ID":"8e9e19dd-550a-467d-bd79-03ee07c2f470","Type":"ContainerDied","Data":"1409d01f2c501abf5116a293f455b4ede7359b5dd6f401ad59f4bc1ff5e27560"} Jan 29 15:49:21 crc kubenswrapper[5008]: I0129 15:49:21.988221 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66f4589f77-j49wf" Jan 29 15:49:22 crc kubenswrapper[5008]: I0129 15:49:22.075194 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-59d66dd7b7-rjtfk"] Jan 29 15:49:22 crc kubenswrapper[5008]: I0129 15:49:22.082487 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-59d66dd7b7-rjtfk"] Jan 29 15:49:22 crc kubenswrapper[5008]: I0129 15:49:22.099433 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66f4589f77-j49wf"] Jan 29 15:49:22 crc kubenswrapper[5008]: I0129 15:49:22.107671 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-66f4589f77-j49wf"] Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.345844 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b110ddf-5eea-4e32-b9f3-f07886d636a2" path="/var/lib/kubelet/pods/3b110ddf-5eea-4e32-b9f3-f07886d636a2/volumes" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.347442 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e9e19dd-550a-467d-bd79-03ee07c2f470" path="/var/lib/kubelet/pods/8e9e19dd-550a-467d-bd79-03ee07c2f470/volumes" Jan 29 15:49:23 crc kubenswrapper[5008]: E0129 15:49:23.390311 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 29 15:49:23 crc kubenswrapper[5008]: E0129 15:49:23.390518 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5jftb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-rcl2z_openstack(4ec0e696-652d-463e-b97e-dad0065a543b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:49:23 crc kubenswrapper[5008]: E0129 15:49:23.391694 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-rcl2z" podUID="4ec0e696-652d-463e-b97e-dad0065a543b" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.468154 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-k22kg" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.468649 5008 scope.go:117] "RemoveContainer" containerID="8c955580cc84bdb7c729644dacf0097c59885b458cef63ff2bf7694209b8b51b" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.497967 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.503848 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.518979 5008 scope.go:117] "RemoveContainer" containerID="083f5bd0f3b73b9e5442787b14d42aed7700b0e82373d83000e080c51c1d585e" Jan 29 15:49:23 crc kubenswrapper[5008]: E0129 15:49:23.546732 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 29 15:49:23 crc kubenswrapper[5008]: E0129 15:49:23.546893 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6b5fh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-fwhd5_openstack(9069f34b-ed91-4ced-8b05-91b83dd02938): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:49:23 crc kubenswrapper[5008]: E0129 15:49:23.548126 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-fwhd5" 
podUID="9069f34b-ed91-4ced-8b05-91b83dd02938" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.591735 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-config-data\") pod \"f8408515-bbd2-46aa-b98f-a331b6659aa8\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.591880 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5f86a518-6363-4796-a4f4-7208aacccc99-horizon-secret-key\") pod \"5f86a518-6363-4796-a4f4-7208aacccc99\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.591946 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f86a518-6363-4796-a4f4-7208aacccc99-logs\") pod \"5f86a518-6363-4796-a4f4-7208aacccc99\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.592001 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-credential-keys\") pod \"f8408515-bbd2-46aa-b98f-a331b6659aa8\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.592031 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btnzd\" (UniqueName: \"kubernetes.io/projected/f8408515-bbd2-46aa-b98f-a331b6659aa8-kube-api-access-btnzd\") pod \"f8408515-bbd2-46aa-b98f-a331b6659aa8\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.592077 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnq52\" (UniqueName: \"kubernetes.io/projected/5f86a518-6363-4796-a4f4-7208aacccc99-kube-api-access-xnq52\") pod \"5f86a518-6363-4796-a4f4-7208aacccc99\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.592143 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-combined-ca-bundle\") pod \"f8408515-bbd2-46aa-b98f-a331b6659aa8\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.592169 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f86a518-6363-4796-a4f4-7208aacccc99-scripts\") pod \"5f86a518-6363-4796-a4f4-7208aacccc99\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.592202 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-fernet-keys\") pod \"f8408515-bbd2-46aa-b98f-a331b6659aa8\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.592225 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-scripts\") pod \"f8408515-bbd2-46aa-b98f-a331b6659aa8\" (UID: \"f8408515-bbd2-46aa-b98f-a331b6659aa8\") " 
Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.592247 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f86a518-6363-4796-a4f4-7208aacccc99-config-data\") pod \"5f86a518-6363-4796-a4f4-7208aacccc99\" (UID: \"5f86a518-6363-4796-a4f4-7208aacccc99\") " Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.592273 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f86a518-6363-4796-a4f4-7208aacccc99-logs" (OuterVolumeSpecName: "logs") pod "5f86a518-6363-4796-a4f4-7208aacccc99" (UID: "5f86a518-6363-4796-a4f4-7208aacccc99"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.592686 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f86a518-6363-4796-a4f4-7208aacccc99-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.593310 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f86a518-6363-4796-a4f4-7208aacccc99-config-data" (OuterVolumeSpecName: "config-data") pod "5f86a518-6363-4796-a4f4-7208aacccc99" (UID: "5f86a518-6363-4796-a4f4-7208aacccc99"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.597833 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f8408515-bbd2-46aa-b98f-a331b6659aa8" (UID: "f8408515-bbd2-46aa-b98f-a331b6659aa8"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.597910 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f86a518-6363-4796-a4f4-7208aacccc99-scripts" (OuterVolumeSpecName: "scripts") pod "5f86a518-6363-4796-a4f4-7208aacccc99" (UID: "5f86a518-6363-4796-a4f4-7208aacccc99"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.598173 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-scripts" (OuterVolumeSpecName: "scripts") pod "f8408515-bbd2-46aa-b98f-a331b6659aa8" (UID: "f8408515-bbd2-46aa-b98f-a331b6659aa8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.598369 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f86a518-6363-4796-a4f4-7208aacccc99-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "5f86a518-6363-4796-a4f4-7208aacccc99" (UID: "5f86a518-6363-4796-a4f4-7208aacccc99"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.598417 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f8408515-bbd2-46aa-b98f-a331b6659aa8" (UID: "f8408515-bbd2-46aa-b98f-a331b6659aa8"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.601881 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f86a518-6363-4796-a4f4-7208aacccc99-kube-api-access-xnq52" (OuterVolumeSpecName: "kube-api-access-xnq52") pod "5f86a518-6363-4796-a4f4-7208aacccc99" (UID: "5f86a518-6363-4796-a4f4-7208aacccc99"). InnerVolumeSpecName "kube-api-access-xnq52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.604994 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8408515-bbd2-46aa-b98f-a331b6659aa8-kube-api-access-btnzd" (OuterVolumeSpecName: "kube-api-access-btnzd") pod "f8408515-bbd2-46aa-b98f-a331b6659aa8" (UID: "f8408515-bbd2-46aa-b98f-a331b6659aa8"). InnerVolumeSpecName "kube-api-access-btnzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.627026 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-config-data" (OuterVolumeSpecName: "config-data") pod "f8408515-bbd2-46aa-b98f-a331b6659aa8" (UID: "f8408515-bbd2-46aa-b98f-a331b6659aa8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.627796 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8408515-bbd2-46aa-b98f-a331b6659aa8" (UID: "f8408515-bbd2-46aa-b98f-a331b6659aa8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.693926 5008 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5f86a518-6363-4796-a4f4-7208aacccc99-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.693957 5008 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.693969 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btnzd\" (UniqueName: \"kubernetes.io/projected/f8408515-bbd2-46aa-b98f-a331b6659aa8-kube-api-access-btnzd\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.693982 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnq52\" (UniqueName: \"kubernetes.io/projected/5f86a518-6363-4796-a4f4-7208aacccc99-kube-api-access-xnq52\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.693994 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f86a518-6363-4796-a4f4-7208aacccc99-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.694006 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.694018 5008 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.694027 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.694037 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f86a518-6363-4796-a4f4-7208aacccc99-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.694047 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8408515-bbd2-46aa-b98f-a331b6659aa8-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:23 crc kubenswrapper[5008]: I0129 15:49:23.926286 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-bf5f5fc4b-t9vk7"] Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.009123 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65975bb757-q7xqt" event={"ID":"5f86a518-6363-4796-a4f4-7208aacccc99","Type":"ContainerDied","Data":"115aa46cd8290b427be260b9a17520dfb8392c574f35bae7cb7c624f65477597"} Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.009219 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-65975bb757-q7xqt" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.010798 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f49b8c48b-x77zl" event={"ID":"8c3bbcd6-6512-4439-b70d-f46dd6382cfe","Type":"ContainerStarted","Data":"dac0f8e5f596bebb7822b413588359e7076b890b5ffed6cda246c2680781b018"} Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.014975 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerStarted","Data":"65ae63639c2ed32e45710e52e6b068b2f105163d6a00247deb197db6c3e0b41c"} Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.018711 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b8gfd" event={"ID":"f8408515-bbd2-46aa-b98f-a331b6659aa8","Type":"ContainerDied","Data":"405bad21fefa05b3e90ec899e50725ce7823c20297242fc88f79da9c15e44ffd"} Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.018750 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="405bad21fefa05b3e90ec899e50725ce7823c20297242fc88f79da9c15e44ffd" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.018817 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-b8gfd" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.022933 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tqc26" event={"ID":"c3a233d5-bf7f-4906-881c-5e81ea64e0e8","Type":"ContainerStarted","Data":"d1071455a85ae82bd88cb84ca9e9539c64ca11a3c5fff1412a478114adf32c80"} Jan 29 15:49:24 crc kubenswrapper[5008]: E0129 15:49:24.025254 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-fwhd5" podUID="9069f34b-ed91-4ced-8b05-91b83dd02938" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.083485 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-tqc26" podStartSLOduration=5.144152025 podStartE2EDuration="1m15.08346618s" podCreationTimestamp="2026-01-29 15:48:09 +0000 UTC" firstStartedPulling="2026-01-29 15:48:10.85183948 +0000 UTC m=+1234.524693717" lastFinishedPulling="2026-01-29 15:49:20.791153605 +0000 UTC m=+1304.464007872" observedRunningTime="2026-01-29 15:49:24.071860279 +0000 UTC m=+1307.744714536" watchObservedRunningTime="2026-01-29 15:49:24.08346618 +0000 UTC m=+1307.756320417" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.132015 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-b8gfd"] Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.149062 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-b8gfd"] Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.173036 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-65975bb757-q7xqt"] Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.188230 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-65975bb757-q7xqt"] Jan 29 15:49:24 crc kubenswrapper[5008]: W0129 15:49:24.196818 5008 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc599e48_62d0_4908_b4ed_cd3f13094665.slice/crio-3b8b028495714be6330f2e40152ee2298496c4252f560d1c6d186ee015deaff1 WatchSource:0}: Error finding container 3b8b028495714be6330f2e40152ee2298496c4252f560d1c6d186ee015deaff1: Status 404 returned error can't find the container with id 3b8b028495714be6330f2e40152ee2298496c4252f560d1c6d186ee015deaff1 Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.198213 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-dkqkc"] Jan 29 15:49:24 crc kubenswrapper[5008]: E0129 15:49:24.198639 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8408515-bbd2-46aa-b98f-a331b6659aa8" containerName="keystone-bootstrap" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.198654 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8408515-bbd2-46aa-b98f-a331b6659aa8" containerName="keystone-bootstrap" Jan 29 15:49:24 crc kubenswrapper[5008]: E0129 15:49:24.198686 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="dnsmasq-dns" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.198714 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="dnsmasq-dns" Jan 29 15:49:24 crc kubenswrapper[5008]: E0129 15:49:24.198736 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="init" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.198744 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="init" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.199003 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d24d44a-1e0f-43ea-a065-9c4f369e0045" containerName="dnsmasq-dns" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.199022 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8408515-bbd2-46aa-b98f-a331b6659aa8" containerName="keystone-bootstrap" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.199538 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.204846 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.205120 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.205309 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.205546 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.205619 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sgcvh" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.210120 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dkqkc"] Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.313375 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-credential-keys\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.313497 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-combined-ca-bundle\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.313548 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-config-data\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.313591 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-fernet-keys\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.313692 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-scripts\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.313773 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x26nk\" (UniqueName: \"kubernetes.io/projected/39abc131-ba3e-4cd8-916a-520789627dd5-kube-api-access-x26nk\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.414936 5008 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-credential-keys\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.415014 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-combined-ca-bundle\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.415039 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-config-data\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.415067 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-fernet-keys\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.415105 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-scripts\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.415172 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x26nk\" (UniqueName: \"kubernetes.io/projected/39abc131-ba3e-4cd8-916a-520789627dd5-kube-api-access-x26nk\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.422670 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-combined-ca-bundle\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.423806 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-credential-keys\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.424342 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-fernet-keys\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.434293 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x26nk\" (UniqueName: \"kubernetes.io/projected/39abc131-ba3e-4cd8-916a-520789627dd5-kube-api-access-x26nk\") pod \"keystone-bootstrap-dkqkc\" (UID: 
\"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.434356 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-config-data\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.434884 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-scripts\") pod \"keystone-bootstrap-dkqkc\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.527046 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:24 crc kubenswrapper[5008]: I0129 15:49:24.792753 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dkqkc"] Jan 29 15:49:25 crc kubenswrapper[5008]: I0129 15:49:25.034633 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dkqkc" event={"ID":"39abc131-ba3e-4cd8-916a-520789627dd5","Type":"ContainerStarted","Data":"9b5824f48cc959e52e85d63863855d59e169e89e7ec31bd5ec6b371bffc34475"} Jan 29 15:49:25 crc kubenswrapper[5008]: I0129 15:49:25.034944 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dkqkc" event={"ID":"39abc131-ba3e-4cd8-916a-520789627dd5","Type":"ContainerStarted","Data":"49bd383a96da543cdc3197d5abfd843e95829c564775027bdeab41c6985acadd"} Jan 29 15:49:25 crc kubenswrapper[5008]: I0129 15:49:25.040516 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8457b44a-814e-403f-a2c9-71907f5cb2d2","Type":"ContainerStarted","Data":"c73a64288c02c3985aea7548e9fdb8867b747089e767ede40e25dba325344234"} Jan 29 15:49:25 crc kubenswrapper[5008]: I0129 15:49:25.043534 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f49b8c48b-x77zl" event={"ID":"8c3bbcd6-6512-4439-b70d-f46dd6382cfe","Type":"ContainerStarted","Data":"864603c565caf07038d917f5b4aaaeae46b873a4ad67b66ea1932218a20e7fdd"} Jan 29 15:49:25 crc kubenswrapper[5008]: I0129 15:49:25.043576 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f49b8c48b-x77zl" event={"ID":"8c3bbcd6-6512-4439-b70d-f46dd6382cfe","Type":"ContainerStarted","Data":"c27f9304d6725c80976f2a7ffbaadb3b415bca1c1d26fe7cd46a2a94470354ae"} Jan 29 15:49:25 crc kubenswrapper[5008]: I0129 15:49:25.046172 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bf5f5fc4b-t9vk7" event={"ID":"fc599e48-62d0-4908-b4ed-cd3f13094665","Type":"ContainerStarted","Data":"5f5aecf8bd63fb893c6a35270d50e8046b10807028399e2cdfea7069233a8cd3"} Jan 29 15:49:25 crc kubenswrapper[5008]: I0129 15:49:25.046218 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bf5f5fc4b-t9vk7" event={"ID":"fc599e48-62d0-4908-b4ed-cd3f13094665","Type":"ContainerStarted","Data":"24754d131e8a0251ba19391e948a3c3a2c435f2c51496c5aec749d99571d090c"} Jan 29 15:49:25 crc kubenswrapper[5008]: I0129 15:49:25.046233 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bf5f5fc4b-t9vk7" 
event={"ID":"fc599e48-62d0-4908-b4ed-cd3f13094665","Type":"ContainerStarted","Data":"3b8b028495714be6330f2e40152ee2298496c4252f560d1c6d186ee015deaff1"} Jan 29 15:49:25 crc kubenswrapper[5008]: I0129 15:49:25.060299 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-dkqkc" podStartSLOduration=1.060281477 podStartE2EDuration="1.060281477s" podCreationTimestamp="2026-01-29 15:49:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:49:25.053616525 +0000 UTC m=+1308.726470792" watchObservedRunningTime="2026-01-29 15:49:25.060281477 +0000 UTC m=+1308.733135714" Jan 29 15:49:25 crc kubenswrapper[5008]: I0129 15:49:25.069037 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7f49b8c48b-x77zl" podStartSLOduration=66.129417814 podStartE2EDuration="1m7.069017698s" podCreationTimestamp="2026-01-29 15:48:18 +0000 UTC" firstStartedPulling="2026-01-29 15:49:23.406660802 +0000 UTC m=+1307.079515069" lastFinishedPulling="2026-01-29 15:49:24.346260716 +0000 UTC m=+1308.019114953" observedRunningTime="2026-01-29 15:49:25.068394904 +0000 UTC m=+1308.741249151" watchObservedRunningTime="2026-01-29 15:49:25.069017698 +0000 UTC m=+1308.741871945" Jan 29 15:49:25 crc kubenswrapper[5008]: I0129 15:49:25.092686 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-bf5f5fc4b-t9vk7" podStartSLOduration=66.708414261 podStartE2EDuration="1m7.092664192s" podCreationTimestamp="2026-01-29 15:48:18 +0000 UTC" firstStartedPulling="2026-01-29 15:49:24.198890701 +0000 UTC m=+1307.871744938" lastFinishedPulling="2026-01-29 15:49:24.583140622 +0000 UTC m=+1308.255994869" observedRunningTime="2026-01-29 15:49:25.087739622 +0000 UTC m=+1308.760593879" watchObservedRunningTime="2026-01-29 15:49:25.092664192 +0000 UTC m=+1308.765518429" Jan 29 15:49:25 crc kubenswrapper[5008]: I0129 15:49:25.340017 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f86a518-6363-4796-a4f4-7208aacccc99" path="/var/lib/kubelet/pods/5f86a518-6363-4796-a4f4-7208aacccc99/volumes" Jan 29 15:49:25 crc kubenswrapper[5008]: I0129 15:49:25.340729 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8408515-bbd2-46aa-b98f-a331b6659aa8" path="/var/lib/kubelet/pods/f8408515-bbd2-46aa-b98f-a331b6659aa8/volumes" Jan 29 15:49:27 crc kubenswrapper[5008]: I0129 15:49:27.064244 5008 generic.go:334] "Generic (PLEG): container finished" podID="c3a233d5-bf7f-4906-881c-5e81ea64e0e8" containerID="d1071455a85ae82bd88cb84ca9e9539c64ca11a3c5fff1412a478114adf32c80" exitCode=0 Jan 29 15:49:27 crc kubenswrapper[5008]: I0129 15:49:27.064895 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tqc26" event={"ID":"c3a233d5-bf7f-4906-881c-5e81ea64e0e8","Type":"ContainerDied","Data":"d1071455a85ae82bd88cb84ca9e9539c64ca11a3c5fff1412a478114adf32c80"} Jan 29 15:49:29 crc kubenswrapper[5008]: I0129 15:49:29.136119 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:49:29 crc kubenswrapper[5008]: I0129 15:49:29.138205 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:49:29 crc kubenswrapper[5008]: I0129 15:49:29.258891 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:49:29 crc kubenswrapper[5008]: I0129 15:49:29.259227 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:49:32 crc kubenswrapper[5008]: I0129 15:49:32.119841 5008 generic.go:334] "Generic (PLEG): container finished" podID="39abc131-ba3e-4cd8-916a-520789627dd5" containerID="9b5824f48cc959e52e85d63863855d59e169e89e7ec31bd5ec6b371bffc34475" exitCode=0 Jan 29 15:49:32 crc kubenswrapper[5008]: I0129 15:49:32.120181 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dkqkc" event={"ID":"39abc131-ba3e-4cd8-916a-520789627dd5","Type":"ContainerDied","Data":"9b5824f48cc959e52e85d63863855d59e169e89e7ec31bd5ec6b371bffc34475"} Jan 29 15:49:34 crc kubenswrapper[5008]: I0129 15:49:34.879684 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tqc26" Jan 29 15:49:34 crc kubenswrapper[5008]: I0129 15:49:34.919314 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.010985 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-config-data\") pod \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.011245 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-fernet-keys\") pod \"39abc131-ba3e-4cd8-916a-520789627dd5\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.011300 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-scripts\") pod \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.011424 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62fxx\" (UniqueName: \"kubernetes.io/projected/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-kube-api-access-62fxx\") pod \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.011484 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-scripts\") pod \"39abc131-ba3e-4cd8-916a-520789627dd5\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.011510 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x26nk\" (UniqueName: \"kubernetes.io/projected/39abc131-ba3e-4cd8-916a-520789627dd5-kube-api-access-x26nk\") pod \"39abc131-ba3e-4cd8-916a-520789627dd5\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.011545 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-config-data\") pod \"39abc131-ba3e-4cd8-916a-520789627dd5\" (UID: 
\"39abc131-ba3e-4cd8-916a-520789627dd5\") " Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.011574 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-combined-ca-bundle\") pod \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.011652 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-logs\") pod \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\" (UID: \"c3a233d5-bf7f-4906-881c-5e81ea64e0e8\") " Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.011684 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-combined-ca-bundle\") pod \"39abc131-ba3e-4cd8-916a-520789627dd5\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.011712 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-credential-keys\") pod \"39abc131-ba3e-4cd8-916a-520789627dd5\" (UID: \"39abc131-ba3e-4cd8-916a-520789627dd5\") " Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.012771 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-logs" (OuterVolumeSpecName: "logs") pod "c3a233d5-bf7f-4906-881c-5e81ea64e0e8" (UID: "c3a233d5-bf7f-4906-881c-5e81ea64e0e8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.016279 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-scripts" (OuterVolumeSpecName: "scripts") pod "39abc131-ba3e-4cd8-916a-520789627dd5" (UID: "39abc131-ba3e-4cd8-916a-520789627dd5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.016748 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "39abc131-ba3e-4cd8-916a-520789627dd5" (UID: "39abc131-ba3e-4cd8-916a-520789627dd5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.017582 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-kube-api-access-62fxx" (OuterVolumeSpecName: "kube-api-access-62fxx") pod "c3a233d5-bf7f-4906-881c-5e81ea64e0e8" (UID: "c3a233d5-bf7f-4906-881c-5e81ea64e0e8"). InnerVolumeSpecName "kube-api-access-62fxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.020913 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39abc131-ba3e-4cd8-916a-520789627dd5-kube-api-access-x26nk" (OuterVolumeSpecName: "kube-api-access-x26nk") pod "39abc131-ba3e-4cd8-916a-520789627dd5" (UID: "39abc131-ba3e-4cd8-916a-520789627dd5"). 
InnerVolumeSpecName "kube-api-access-x26nk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.021430 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "39abc131-ba3e-4cd8-916a-520789627dd5" (UID: "39abc131-ba3e-4cd8-916a-520789627dd5"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.023401 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-scripts" (OuterVolumeSpecName: "scripts") pod "c3a233d5-bf7f-4906-881c-5e81ea64e0e8" (UID: "c3a233d5-bf7f-4906-881c-5e81ea64e0e8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.044490 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3a233d5-bf7f-4906-881c-5e81ea64e0e8" (UID: "c3a233d5-bf7f-4906-881c-5e81ea64e0e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.051020 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39abc131-ba3e-4cd8-916a-520789627dd5" (UID: "39abc131-ba3e-4cd8-916a-520789627dd5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.058636 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-config-data" (OuterVolumeSpecName: "config-data") pod "39abc131-ba3e-4cd8-916a-520789627dd5" (UID: "39abc131-ba3e-4cd8-916a-520789627dd5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.059133 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-config-data" (OuterVolumeSpecName: "config-data") pod "c3a233d5-bf7f-4906-881c-5e81ea64e0e8" (UID: "c3a233d5-bf7f-4906-881c-5e81ea64e0e8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.113415 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.113453 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.113463 5008 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.113472 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.113480 5008 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.113488 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.113497 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62fxx\" (UniqueName: \"kubernetes.io/projected/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-kube-api-access-62fxx\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.113507 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.113514 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x26nk\" (UniqueName: \"kubernetes.io/projected/39abc131-ba3e-4cd8-916a-520789627dd5-kube-api-access-x26nk\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.113522 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39abc131-ba3e-4cd8-916a-520789627dd5-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.113529 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3a233d5-bf7f-4906-881c-5e81ea64e0e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.155356 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tqc26" event={"ID":"c3a233d5-bf7f-4906-881c-5e81ea64e0e8","Type":"ContainerDied","Data":"7463a1c0c912427b5643e45ef8f082d31f897a9969a145430140c8f0d851f2fa"} Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.155392 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7463a1c0c912427b5643e45ef8f082d31f897a9969a145430140c8f0d851f2fa" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.155637 5008 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tqc26" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.157236 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dkqkc" event={"ID":"39abc131-ba3e-4cd8-916a-520789627dd5","Type":"ContainerDied","Data":"49bd383a96da543cdc3197d5abfd843e95829c564775027bdeab41c6985acadd"} Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.157268 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49bd383a96da543cdc3197d5abfd843e95829c564775027bdeab41c6985acadd" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.157317 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dkqkc" Jan 29 15:49:35 crc kubenswrapper[5008]: I0129 15:49:35.158624 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8457b44a-814e-403f-a2c9-71907f5cb2d2","Type":"ContainerStarted","Data":"73570da7fb4cd60403415b8ef7560376566de89eb802bb7bc549402efb543a24"} Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.000480 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6445bd445b-mhznq"] Jan 29 15:49:36 crc kubenswrapper[5008]: E0129 15:49:36.001333 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39abc131-ba3e-4cd8-916a-520789627dd5" containerName="keystone-bootstrap" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.001356 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="39abc131-ba3e-4cd8-916a-520789627dd5" containerName="keystone-bootstrap" Jan 29 15:49:36 crc kubenswrapper[5008]: E0129 15:49:36.001410 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a233d5-bf7f-4906-881c-5e81ea64e0e8" containerName="placement-db-sync" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.001422 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a233d5-bf7f-4906-881c-5e81ea64e0e8" containerName="placement-db-sync" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.001703 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3a233d5-bf7f-4906-881c-5e81ea64e0e8" containerName="placement-db-sync" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.001754 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="39abc131-ba3e-4cd8-916a-520789627dd5" containerName="keystone-bootstrap" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.002965 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.008503 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.008940 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.009456 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.009442 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.009865 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rlqfr" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.020231 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6445bd445b-mhznq"] Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.080058 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-779d6696cc-ltp9g"] Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.081476 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.085375 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.086020 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.086125 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.086386 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.086423 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sgcvh" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.086508 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.098233 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-779d6696cc-ltp9g"] Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.129899 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-logs\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.130007 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-internal-tls-certs\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.130034 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-config-data\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.130139 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-combined-ca-bundle\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.130178 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-scripts\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.130208 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-public-tls-certs\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.130252 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhbxw\" (UniqueName: \"kubernetes.io/projected/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-kube-api-access-qhbxw\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.232122 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-logs\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.232207 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-internal-tls-certs\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.232233 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-config-data\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.232283 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-scripts\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.232327 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-combined-ca-bundle\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.232368 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-credential-keys\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.232403 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-fernet-keys\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.232427 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-config-data\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.232466 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-combined-ca-bundle\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.232491 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-scripts\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.232523 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c5d7\" (UniqueName: \"kubernetes.io/projected/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-kube-api-access-6c5d7\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.232545 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-internal-tls-certs\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.232574 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-public-tls-certs\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.232596 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-public-tls-certs\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.232617 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhbxw\" (UniqueName: \"kubernetes.io/projected/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-kube-api-access-qhbxw\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.233492 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-logs\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.240216 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-scripts\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.240389 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-internal-tls-certs\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.240467 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-public-tls-certs\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.241592 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-config-data\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.243313 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-combined-ca-bundle\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.259466 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhbxw\" (UniqueName: \"kubernetes.io/projected/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-kube-api-access-qhbxw\") pod \"placement-6445bd445b-mhznq\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.321143 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:36 crc kubenswrapper[5008]: E0129 15:49:36.331997 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-rcl2z" podUID="4ec0e696-652d-463e-b97e-dad0065a543b" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.334093 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-credential-keys\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.334148 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-fernet-keys\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.334177 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-config-data\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.334228 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c5d7\" (UniqueName: \"kubernetes.io/projected/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-kube-api-access-6c5d7\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.334252 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-internal-tls-certs\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.334287 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-public-tls-certs\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.334384 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-scripts\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.334450 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-combined-ca-bundle\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc 
kubenswrapper[5008]: I0129 15:49:36.338958 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-credential-keys\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.339003 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-config-data\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.339486 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-combined-ca-bundle\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.340021 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-public-tls-certs\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.341594 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-internal-tls-certs\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.342024 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-fernet-keys\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.344068 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-scripts\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.359584 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c5d7\" (UniqueName: \"kubernetes.io/projected/4732d1d7-c3d2-4f17-bf74-d92f350a3e2b-kube-api-access-6c5d7\") pod \"keystone-779d6696cc-ltp9g\" (UID: \"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b\") " pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.399548 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.829873 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6445bd445b-mhznq"] Jan 29 15:49:36 crc kubenswrapper[5008]: I0129 15:49:36.902669 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-779d6696cc-ltp9g"] Jan 29 15:49:36 crc kubenswrapper[5008]: W0129 15:49:36.919083 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4732d1d7_c3d2_4f17_bf74_d92f350a3e2b.slice/crio-46c0ae4c762a9ba85e2c5ed7bc28a3370050fbd43d8b6ce834bcc586e01359eb WatchSource:0}: Error finding container 46c0ae4c762a9ba85e2c5ed7bc28a3370050fbd43d8b6ce834bcc586e01359eb: Status 404 returned error can't find the container with id 46c0ae4c762a9ba85e2c5ed7bc28a3370050fbd43d8b6ce834bcc586e01359eb Jan 29 15:49:37 crc kubenswrapper[5008]: I0129 15:49:37.176957 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-779d6696cc-ltp9g" event={"ID":"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b","Type":"ContainerStarted","Data":"90a7a8a001252d6d79a74dafc7f878323b0551fc1bacab57b7f43ca842970005"} Jan 29 15:49:37 crc kubenswrapper[5008]: I0129 15:49:37.177010 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-779d6696cc-ltp9g" event={"ID":"4732d1d7-c3d2-4f17-bf74-d92f350a3e2b","Type":"ContainerStarted","Data":"46c0ae4c762a9ba85e2c5ed7bc28a3370050fbd43d8b6ce834bcc586e01359eb"} Jan 29 15:49:37 crc kubenswrapper[5008]: I0129 15:49:37.177466 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:49:37 crc kubenswrapper[5008]: I0129 15:49:37.180389 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6445bd445b-mhznq" event={"ID":"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b","Type":"ContainerStarted","Data":"922dd14c1fc131087530a679c50232179f2527a755c5b35806f14d9f5f69d2cd"} Jan 29 15:49:37 crc kubenswrapper[5008]: I0129 15:49:37.180421 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6445bd445b-mhznq" event={"ID":"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b","Type":"ContainerStarted","Data":"359a72657c9bfba53abd214342c7a1e93d76aafd5e6beccbea5acec3bf995e32"} Jan 29 15:49:37 crc kubenswrapper[5008]: I0129 15:49:37.203282 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-779d6696cc-ltp9g" podStartSLOduration=1.2032598939999999 podStartE2EDuration="1.203259894s" podCreationTimestamp="2026-01-29 15:49:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:49:37.197117886 +0000 UTC m=+1320.869972133" watchObservedRunningTime="2026-01-29 15:49:37.203259894 +0000 UTC m=+1320.876114131" Jan 29 15:49:38 crc kubenswrapper[5008]: I0129 15:49:38.191193 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6445bd445b-mhznq" event={"ID":"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b","Type":"ContainerStarted","Data":"eda94a8d83e7b9b941d8d728214164666a763004cdb54a95b67730b9ed4bb21a"} Jan 29 15:49:38 crc kubenswrapper[5008]: I0129 15:49:38.191924 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:38 crc kubenswrapper[5008]: I0129 15:49:38.225581 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/placement-6445bd445b-mhznq" podStartSLOduration=3.225557095 podStartE2EDuration="3.225557095s" podCreationTimestamp="2026-01-29 15:49:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:49:38.219908448 +0000 UTC m=+1321.892762725" watchObservedRunningTime="2026-01-29 15:49:38.225557095 +0000 UTC m=+1321.898411362" Jan 29 15:49:39 crc kubenswrapper[5008]: I0129 15:49:39.136661 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7f49b8c48b-x77zl" podUID="8c3bbcd6-6512-4439-b70d-f46dd6382cfe" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Jan 29 15:49:39 crc kubenswrapper[5008]: I0129 15:49:39.201292 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:49:39 crc kubenswrapper[5008]: I0129 15:49:39.260894 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-bf5f5fc4b-t9vk7" podUID="fc599e48-62d0-4908-b4ed-cd3f13094665" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 29 15:49:40 crc kubenswrapper[5008]: I0129 15:49:40.211276 5008 generic.go:334] "Generic (PLEG): container finished" podID="8277eb2b-44f8-4fd9-af92-1832e0272e0e" containerID="bde50669bd65351b30c48ee0e65fb0911aba9f1d7624eae95461658432ebf883" exitCode=0 Jan 29 15:49:40 crc kubenswrapper[5008]: I0129 15:49:40.211476 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-n7wgw" event={"ID":"8277eb2b-44f8-4fd9-af92-1832e0272e0e","Type":"ContainerDied","Data":"bde50669bd65351b30c48ee0e65fb0911aba9f1d7624eae95461658432ebf883"} Jan 29 15:49:40 crc kubenswrapper[5008]: I0129 15:49:40.214250 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-fwhd5" event={"ID":"9069f34b-ed91-4ced-8b05-91b83dd02938","Type":"ContainerStarted","Data":"4235463096f31772a59e698a0a90916f6b2c055027357bae8128e733c3b9757d"} Jan 29 15:49:40 crc kubenswrapper[5008]: I0129 15:49:40.252933 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-fwhd5" podStartSLOduration=2.886856579 podStartE2EDuration="1m31.252913705s" podCreationTimestamp="2026-01-29 15:48:09 +0000 UTC" firstStartedPulling="2026-01-29 15:48:10.749747664 +0000 UTC m=+1234.422601901" lastFinishedPulling="2026-01-29 15:49:39.11580477 +0000 UTC m=+1322.788659027" observedRunningTime="2026-01-29 15:49:40.248167159 +0000 UTC m=+1323.921021396" watchObservedRunningTime="2026-01-29 15:49:40.252913705 +0000 UTC m=+1323.925767962" Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.183906 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-n7wgw" Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.233439 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-n7wgw" event={"ID":"8277eb2b-44f8-4fd9-af92-1832e0272e0e","Type":"ContainerDied","Data":"b1174780d2fa3fe7c06477c9d106ea7940e8a6e121cc29c7f9f91c93470ca373"} Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.233681 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1174780d2fa3fe7c06477c9d106ea7940e8a6e121cc29c7f9f91c93470ca373" Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.233848 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-n7wgw" Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.340877 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9m6lk\" (UniqueName: \"kubernetes.io/projected/8277eb2b-44f8-4fd9-af92-1832e0272e0e-kube-api-access-9m6lk\") pod \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\" (UID: \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\") " Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.341393 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-db-sync-config-data\") pod \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\" (UID: \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\") " Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.341555 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-combined-ca-bundle\") pod \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\" (UID: \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\") " Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.341595 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-config-data\") pod \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\" (UID: \"8277eb2b-44f8-4fd9-af92-1832e0272e0e\") " Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.347614 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8277eb2b-44f8-4fd9-af92-1832e0272e0e-kube-api-access-9m6lk" (OuterVolumeSpecName: "kube-api-access-9m6lk") pod "8277eb2b-44f8-4fd9-af92-1832e0272e0e" (UID: "8277eb2b-44f8-4fd9-af92-1832e0272e0e"). InnerVolumeSpecName "kube-api-access-9m6lk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.352165 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8277eb2b-44f8-4fd9-af92-1832e0272e0e" (UID: "8277eb2b-44f8-4fd9-af92-1832e0272e0e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.379581 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8277eb2b-44f8-4fd9-af92-1832e0272e0e" (UID: "8277eb2b-44f8-4fd9-af92-1832e0272e0e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.403272 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-config-data" (OuterVolumeSpecName: "config-data") pod "8277eb2b-44f8-4fd9-af92-1832e0272e0e" (UID: "8277eb2b-44f8-4fd9-af92-1832e0272e0e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.443597 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9m6lk\" (UniqueName: \"kubernetes.io/projected/8277eb2b-44f8-4fd9-af92-1832e0272e0e-kube-api-access-9m6lk\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.444355 5008 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.444365 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.444374 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8277eb2b-44f8-4fd9-af92-1832e0272e0e-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.731006 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-ltv6m"] Jan 29 15:49:42 crc kubenswrapper[5008]: E0129 15:49:42.731353 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8277eb2b-44f8-4fd9-af92-1832e0272e0e" containerName="glance-db-sync" Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.731370 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="8277eb2b-44f8-4fd9-af92-1832e0272e0e" containerName="glance-db-sync" Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.731557 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="8277eb2b-44f8-4fd9-af92-1832e0272e0e" containerName="glance-db-sync" Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.732427 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.743504 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-ltv6m"] Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.851185 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.851286 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.851317 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-config\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.851364 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.851387 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.851410 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4tmk\" (UniqueName: \"kubernetes.io/projected/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-kube-api-access-n4tmk\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.952428 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.952482 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-config\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.952533 5008 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.952553 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.952570 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4tmk\" (UniqueName: \"kubernetes.io/projected/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-kube-api-access-n4tmk\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.952603 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.953848 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.953854 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.953955 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.954003 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-config\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.954458 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" Jan 29 15:49:42 crc kubenswrapper[5008]: I0129 15:49:42.975829 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4tmk\" (UniqueName: 
\"kubernetes.io/projected/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-kube-api-access-n4tmk\") pod \"dnsmasq-dns-56df8fb6b7-ltv6m\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.048840 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.538857 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-ltv6m"] Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.635435 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.637142 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.643416 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.643890 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.644286 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-2qq6q" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.655044 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.766383 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-956zp\" (UniqueName: \"kubernetes.io/projected/fa21b57d-29c9-4b5d-8712-66e3d5762f26-kube-api-access-956zp\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.766438 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-scripts\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.766539 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.766577 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-config-data\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.766598 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa21b57d-29c9-4b5d-8712-66e3d5762f26-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.766646 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa21b57d-29c9-4b5d-8712-66e3d5762f26-logs\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.766868 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.868799 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.868871 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-config-data\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.868901 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa21b57d-29c9-4b5d-8712-66e3d5762f26-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.868950 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa21b57d-29c9-4b5d-8712-66e3d5762f26-logs\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.869014 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.869076 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-956zp\" (UniqueName: \"kubernetes.io/projected/fa21b57d-29c9-4b5d-8712-66e3d5762f26-kube-api-access-956zp\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.869099 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-scripts\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0" Jan 29 
15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.869656 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa21b57d-29c9-4b5d-8712-66e3d5762f26-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.869747 5008 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.869885 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa21b57d-29c9-4b5d-8712-66e3d5762f26-logs\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.873134 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.873669 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.877909 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.880067 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-config-data\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.880203 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-scripts\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.880807 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.887249 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.892712 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-956zp\" (UniqueName: \"kubernetes.io/projected/fa21b57d-29c9-4b5d-8712-66e3d5762f26-kube-api-access-956zp\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.907799 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.971328 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.971393 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5858a5f6-5bd8-43b0-84bd-fc0cca454905-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.971441 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.971463 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gf7r\" (UniqueName: \"kubernetes.io/projected/5858a5f6-5bd8-43b0-84bd-fc0cca454905-kube-api-access-6gf7r\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:43 crc 
kubenswrapper[5008]: I0129 15:49:43.971506 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.971547 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:43 crc kubenswrapper[5008]: I0129 15:49:43.971565 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5858a5f6-5bd8-43b0-84bd-fc0cca454905-logs\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.002544 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.072939 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.073009 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5858a5f6-5bd8-43b0-84bd-fc0cca454905-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.073099 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.073128 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gf7r\" (UniqueName: \"kubernetes.io/projected/5858a5f6-5bd8-43b0-84bd-fc0cca454905-kube-api-access-6gf7r\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.073185 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.073217 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.073244 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5858a5f6-5bd8-43b0-84bd-fc0cca454905-logs\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.073769 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5858a5f6-5bd8-43b0-84bd-fc0cca454905-logs\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.074739 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5858a5f6-5bd8-43b0-84bd-fc0cca454905-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.074759 5008 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.080836 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.088730 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.106024 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.109807 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gf7r\" (UniqueName: \"kubernetes.io/projected/5858a5f6-5bd8-43b0-84bd-fc0cca454905-kube-api-access-6gf7r\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.117237 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:44 crc 
kubenswrapper[5008]: I0129 15:49:44.201532 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.277055 5008 generic.go:334] "Generic (PLEG): container finished" podID="eeec0b0d-d386-486c-9bd7-2dfe88016cd8" containerID="79dbfd36569d422fdc7006449ff5ac80732d06ddd7a01c876a0b70533ac654e5" exitCode=0 Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.277100 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" event={"ID":"eeec0b0d-d386-486c-9bd7-2dfe88016cd8","Type":"ContainerDied","Data":"79dbfd36569d422fdc7006449ff5ac80732d06ddd7a01c876a0b70533ac654e5"} Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.277126 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" event={"ID":"eeec0b0d-d386-486c-9bd7-2dfe88016cd8","Type":"ContainerStarted","Data":"ae0b8d6c25c2b8b74e6f25f289f7b9be41b0f6b931b8004d5a8d1e2aa3fcb1dc"} Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.558939 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:49:44 crc kubenswrapper[5008]: W0129 15:49:44.576738 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa21b57d_29c9_4b5d_8712_66e3d5762f26.slice/crio-cb8d466fac355bf7024eed81be62a68b67397ec6430e1ffe9e4072ffb3b4fd0b WatchSource:0}: Error finding container cb8d466fac355bf7024eed81be62a68b67397ec6430e1ffe9e4072ffb3b4fd0b: Status 404 returned error can't find the container with id cb8d466fac355bf7024eed81be62a68b67397ec6430e1ffe9e4072ffb3b4fd0b Jan 29 15:49:44 crc kubenswrapper[5008]: I0129 15:49:44.800206 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:49:44 crc kubenswrapper[5008]: W0129 15:49:44.811205 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5858a5f6_5bd8_43b0_84bd_fc0cca454905.slice/crio-68397ad26b3459caddedb82fa3d7628ebbcabc96acf3985abf57176a47336435 WatchSource:0}: Error finding container 68397ad26b3459caddedb82fa3d7628ebbcabc96acf3985abf57176a47336435: Status 404 returned error can't find the container with id 68397ad26b3459caddedb82fa3d7628ebbcabc96acf3985abf57176a47336435 Jan 29 15:49:45 crc kubenswrapper[5008]: I0129 15:49:45.297883 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:49:45 crc kubenswrapper[5008]: I0129 15:49:45.298747 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5858a5f6-5bd8-43b0-84bd-fc0cca454905","Type":"ContainerStarted","Data":"68397ad26b3459caddedb82fa3d7628ebbcabc96acf3985abf57176a47336435"} Jan 29 15:49:45 crc kubenswrapper[5008]: I0129 15:49:45.316975 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" event={"ID":"eeec0b0d-d386-486c-9bd7-2dfe88016cd8","Type":"ContainerStarted","Data":"8f1b00e6962ba213860058464826f9ee3c7898cafeff02094c1871a86a85758a"} Jan 29 15:49:45 crc kubenswrapper[5008]: I0129 15:49:45.317197 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" Jan 29 15:49:45 crc kubenswrapper[5008]: I0129 15:49:45.321382 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"fa21b57d-29c9-4b5d-8712-66e3d5762f26","Type":"ContainerStarted","Data":"c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5"} Jan 29 15:49:45 crc kubenswrapper[5008]: I0129 15:49:45.321573 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa21b57d-29c9-4b5d-8712-66e3d5762f26","Type":"ContainerStarted","Data":"cb8d466fac355bf7024eed81be62a68b67397ec6430e1ffe9e4072ffb3b4fd0b"} Jan 29 15:49:45 crc kubenswrapper[5008]: I0129 15:49:45.348573 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" podStartSLOduration=3.348551747 podStartE2EDuration="3.348551747s" podCreationTimestamp="2026-01-29 15:49:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:49:45.341014184 +0000 UTC m=+1329.013868441" watchObservedRunningTime="2026-01-29 15:49:45.348551747 +0000 UTC m=+1329.021406004" Jan 29 15:49:45 crc kubenswrapper[5008]: I0129 15:49:45.379882 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:49:46 crc kubenswrapper[5008]: I0129 15:49:46.331776 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa21b57d-29c9-4b5d-8712-66e3d5762f26","Type":"ContainerStarted","Data":"15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44"} Jan 29 15:49:46 crc kubenswrapper[5008]: I0129 15:49:46.332333 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fa21b57d-29c9-4b5d-8712-66e3d5762f26" containerName="glance-log" containerID="cri-o://c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5" gracePeriod=30 Jan 29 15:49:46 crc kubenswrapper[5008]: I0129 15:49:46.332835 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fa21b57d-29c9-4b5d-8712-66e3d5762f26" containerName="glance-httpd" containerID="cri-o://15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44" gracePeriod=30 Jan 29 15:49:46 crc kubenswrapper[5008]: I0129 15:49:46.351921 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5858a5f6-5bd8-43b0-84bd-fc0cca454905","Type":"ContainerStarted","Data":"0bfc91d2a4701b82935b807bf656dedc91ce5258bac2402d794853c482c9e6ba"} Jan 29 15:49:46 crc kubenswrapper[5008]: I0129 15:49:46.351973 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5858a5f6-5bd8-43b0-84bd-fc0cca454905","Type":"ContainerStarted","Data":"f8a7a58418b5d32fdb5298004fa279c77a6a8f01505d43cf209a60b4445f0b33"} Jan 29 15:49:46 crc kubenswrapper[5008]: I0129 15:49:46.352192 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5858a5f6-5bd8-43b0-84bd-fc0cca454905" containerName="glance-log" containerID="cri-o://f8a7a58418b5d32fdb5298004fa279c77a6a8f01505d43cf209a60b4445f0b33" gracePeriod=30 Jan 29 15:49:46 crc kubenswrapper[5008]: I0129 15:49:46.352212 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5858a5f6-5bd8-43b0-84bd-fc0cca454905" containerName="glance-httpd" 
containerID="cri-o://0bfc91d2a4701b82935b807bf656dedc91ce5258bac2402d794853c482c9e6ba" gracePeriod=30 Jan 29 15:49:46 crc kubenswrapper[5008]: I0129 15:49:46.358730 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.358707231 podStartE2EDuration="4.358707231s" podCreationTimestamp="2026-01-29 15:49:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:49:46.35456488 +0000 UTC m=+1330.027419137" watchObservedRunningTime="2026-01-29 15:49:46.358707231 +0000 UTC m=+1330.031561468" Jan 29 15:49:46 crc kubenswrapper[5008]: I0129 15:49:46.390800 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.390761299 podStartE2EDuration="4.390761299s" podCreationTimestamp="2026-01-29 15:49:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:49:46.382008527 +0000 UTC m=+1330.054862784" watchObservedRunningTime="2026-01-29 15:49:46.390761299 +0000 UTC m=+1330.063615556" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.017762 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.041850 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-scripts\") pod \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.042061 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-config-data\") pod \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.042106 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-956zp\" (UniqueName: \"kubernetes.io/projected/fa21b57d-29c9-4b5d-8712-66e3d5762f26-kube-api-access-956zp\") pod \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.042134 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa21b57d-29c9-4b5d-8712-66e3d5762f26-logs\") pod \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.042178 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa21b57d-29c9-4b5d-8712-66e3d5762f26-httpd-run\") pod \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.042242 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-combined-ca-bundle\") pod \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " Jan 29 15:49:47 crc 
kubenswrapper[5008]: I0129 15:49:47.042273 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\" (UID: \"fa21b57d-29c9-4b5d-8712-66e3d5762f26\") " Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.043397 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa21b57d-29c9-4b5d-8712-66e3d5762f26-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fa21b57d-29c9-4b5d-8712-66e3d5762f26" (UID: "fa21b57d-29c9-4b5d-8712-66e3d5762f26"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.044346 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa21b57d-29c9-4b5d-8712-66e3d5762f26-logs" (OuterVolumeSpecName: "logs") pod "fa21b57d-29c9-4b5d-8712-66e3d5762f26" (UID: "fa21b57d-29c9-4b5d-8712-66e3d5762f26"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.057420 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-scripts" (OuterVolumeSpecName: "scripts") pod "fa21b57d-29c9-4b5d-8712-66e3d5762f26" (UID: "fa21b57d-29c9-4b5d-8712-66e3d5762f26"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.067087 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa21b57d-29c9-4b5d-8712-66e3d5762f26-kube-api-access-956zp" (OuterVolumeSpecName: "kube-api-access-956zp") pod "fa21b57d-29c9-4b5d-8712-66e3d5762f26" (UID: "fa21b57d-29c9-4b5d-8712-66e3d5762f26"). InnerVolumeSpecName "kube-api-access-956zp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.081708 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "fa21b57d-29c9-4b5d-8712-66e3d5762f26" (UID: "fa21b57d-29c9-4b5d-8712-66e3d5762f26"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.082948 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa21b57d-29c9-4b5d-8712-66e3d5762f26" (UID: "fa21b57d-29c9-4b5d-8712-66e3d5762f26"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.098802 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-config-data" (OuterVolumeSpecName: "config-data") pod "fa21b57d-29c9-4b5d-8712-66e3d5762f26" (UID: "fa21b57d-29c9-4b5d-8712-66e3d5762f26"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.144304 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.144354 5008 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.144364 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.144373 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa21b57d-29c9-4b5d-8712-66e3d5762f26-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.144382 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-956zp\" (UniqueName: \"kubernetes.io/projected/fa21b57d-29c9-4b5d-8712-66e3d5762f26-kube-api-access-956zp\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.144392 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa21b57d-29c9-4b5d-8712-66e3d5762f26-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.144400 5008 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa21b57d-29c9-4b5d-8712-66e3d5762f26-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.163733 5008 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.245456 5008 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.363816 5008 generic.go:334] "Generic (PLEG): container finished" podID="fa21b57d-29c9-4b5d-8712-66e3d5762f26" containerID="15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44" exitCode=0 Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.363860 5008 generic.go:334] "Generic (PLEG): container finished" podID="fa21b57d-29c9-4b5d-8712-66e3d5762f26" containerID="c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5" exitCode=143 Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.363924 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa21b57d-29c9-4b5d-8712-66e3d5762f26","Type":"ContainerDied","Data":"15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44"} Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.363961 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa21b57d-29c9-4b5d-8712-66e3d5762f26","Type":"ContainerDied","Data":"c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5"} Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.363980 
5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa21b57d-29c9-4b5d-8712-66e3d5762f26","Type":"ContainerDied","Data":"cb8d466fac355bf7024eed81be62a68b67397ec6430e1ffe9e4072ffb3b4fd0b"} Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.364004 5008 scope.go:117] "RemoveContainer" containerID="15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.364181 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.377105 5008 generic.go:334] "Generic (PLEG): container finished" podID="5858a5f6-5bd8-43b0-84bd-fc0cca454905" containerID="0bfc91d2a4701b82935b807bf656dedc91ce5258bac2402d794853c482c9e6ba" exitCode=0 Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.377371 5008 generic.go:334] "Generic (PLEG): container finished" podID="5858a5f6-5bd8-43b0-84bd-fc0cca454905" containerID="f8a7a58418b5d32fdb5298004fa279c77a6a8f01505d43cf209a60b4445f0b33" exitCode=143 Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.377182 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5858a5f6-5bd8-43b0-84bd-fc0cca454905","Type":"ContainerDied","Data":"0bfc91d2a4701b82935b807bf656dedc91ce5258bac2402d794853c482c9e6ba"} Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.377417 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5858a5f6-5bd8-43b0-84bd-fc0cca454905","Type":"ContainerDied","Data":"f8a7a58418b5d32fdb5298004fa279c77a6a8f01505d43cf209a60b4445f0b33"} Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.413599 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.422753 5008 scope.go:117] "RemoveContainer" containerID="c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.435588 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.456913 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:49:47 crc kubenswrapper[5008]: E0129 15:49:47.457324 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa21b57d-29c9-4b5d-8712-66e3d5762f26" containerName="glance-log" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.457340 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa21b57d-29c9-4b5d-8712-66e3d5762f26" containerName="glance-log" Jan 29 15:49:47 crc kubenswrapper[5008]: E0129 15:49:47.457360 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa21b57d-29c9-4b5d-8712-66e3d5762f26" containerName="glance-httpd" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.457367 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa21b57d-29c9-4b5d-8712-66e3d5762f26" containerName="glance-httpd" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.457563 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa21b57d-29c9-4b5d-8712-66e3d5762f26" containerName="glance-log" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.457631 5008 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fa21b57d-29c9-4b5d-8712-66e3d5762f26" containerName="glance-httpd" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.462290 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.464032 5008 scope.go:117] "RemoveContainer" containerID="15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44" Jan 29 15:49:47 crc kubenswrapper[5008]: E0129 15:49:47.466578 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44\": container with ID starting with 15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44 not found: ID does not exist" containerID="15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.466620 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44"} err="failed to get container status \"15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44\": rpc error: code = NotFound desc = could not find container \"15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44\": container with ID starting with 15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44 not found: ID does not exist" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.466678 5008 scope.go:117] "RemoveContainer" containerID="c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5" Jan 29 15:49:47 crc kubenswrapper[5008]: E0129 15:49:47.467214 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5\": container with ID starting with c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5 not found: ID does not exist" containerID="c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.467260 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5"} err="failed to get container status \"c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5\": rpc error: code = NotFound desc = could not find container \"c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5\": container with ID starting with c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5 not found: ID does not exist" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.467288 5008 scope.go:117] "RemoveContainer" containerID="15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.467667 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.467885 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.470245 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44"} err="failed to get container status 
\"15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44\": rpc error: code = NotFound desc = could not find container \"15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44\": container with ID starting with 15802ef9e7ad1550f97d0465cb45caef2306052c7877bb5028645b985dfd3c44 not found: ID does not exist" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.470275 5008 scope.go:117] "RemoveContainer" containerID="c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.470474 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.471112 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5"} err="failed to get container status \"c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5\": rpc error: code = NotFound desc = could not find container \"c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5\": container with ID starting with c39c27dcdd94f4899fd51121a8cc666b5c4972f35e5ccdc41554ad30f32d91f5 not found: ID does not exist" Jan 29 15:49:47 crc kubenswrapper[5008]: E0129 15:49:47.473210 5008 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa21b57d_29c9_4b5d_8712_66e3d5762f26.slice\": RecentStats: unable to find data in memory cache]" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.551232 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a4572386-a7c3-434a-8bcb-d1643d6893c9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.551358 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw62q\" (UniqueName: \"kubernetes.io/projected/a4572386-a7c3-434a-8bcb-d1643d6893c9-kube-api-access-rw62q\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.551453 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4572386-a7c3-434a-8bcb-d1643d6893c9-logs\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.551486 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.551594 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.551716 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.551741 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.551825 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-scripts\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.653281 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a4572386-a7c3-434a-8bcb-d1643d6893c9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.655266 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw62q\" (UniqueName: \"kubernetes.io/projected/a4572386-a7c3-434a-8bcb-d1643d6893c9-kube-api-access-rw62q\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.655295 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4572386-a7c3-434a-8bcb-d1643d6893c9-logs\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.655317 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.655345 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-config-data\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.655381 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: 
\"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.655402 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.655433 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-scripts\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.654113 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a4572386-a7c3-434a-8bcb-d1643d6893c9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.656505 5008 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.658047 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4572386-a7c3-434a-8bcb-d1643d6893c9-logs\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.660416 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-scripts\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.661372 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-config-data\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.667365 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.671597 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.678246 5008 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw62q\" (UniqueName: \"kubernetes.io/projected/a4572386-a7c3-434a-8bcb-d1643d6893c9-kube-api-access-rw62q\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.695370 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.784929 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.830498 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.858623 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-combined-ca-bundle\") pod \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.858681 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-config-data\") pod \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.858757 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-scripts\") pod \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.858903 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.858937 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5858a5f6-5bd8-43b0-84bd-fc0cca454905-httpd-run\") pod \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.859014 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5858a5f6-5bd8-43b0-84bd-fc0cca454905-logs\") pod \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.859048 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gf7r\" (UniqueName: \"kubernetes.io/projected/5858a5f6-5bd8-43b0-84bd-fc0cca454905-kube-api-access-6gf7r\") pod \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\" (UID: \"5858a5f6-5bd8-43b0-84bd-fc0cca454905\") " Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.860178 5008 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5858a5f6-5bd8-43b0-84bd-fc0cca454905-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5858a5f6-5bd8-43b0-84bd-fc0cca454905" (UID: "5858a5f6-5bd8-43b0-84bd-fc0cca454905"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.860672 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5858a5f6-5bd8-43b0-84bd-fc0cca454905-logs" (OuterVolumeSpecName: "logs") pod "5858a5f6-5bd8-43b0-84bd-fc0cca454905" (UID: "5858a5f6-5bd8-43b0-84bd-fc0cca454905"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.865220 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "5858a5f6-5bd8-43b0-84bd-fc0cca454905" (UID: "5858a5f6-5bd8-43b0-84bd-fc0cca454905"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.866003 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-scripts" (OuterVolumeSpecName: "scripts") pod "5858a5f6-5bd8-43b0-84bd-fc0cca454905" (UID: "5858a5f6-5bd8-43b0-84bd-fc0cca454905"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.884248 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5858a5f6-5bd8-43b0-84bd-fc0cca454905-kube-api-access-6gf7r" (OuterVolumeSpecName: "kube-api-access-6gf7r") pod "5858a5f6-5bd8-43b0-84bd-fc0cca454905" (UID: "5858a5f6-5bd8-43b0-84bd-fc0cca454905"). InnerVolumeSpecName "kube-api-access-6gf7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.917004 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5858a5f6-5bd8-43b0-84bd-fc0cca454905" (UID: "5858a5f6-5bd8-43b0-84bd-fc0cca454905"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.925511 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-config-data" (OuterVolumeSpecName: "config-data") pod "5858a5f6-5bd8-43b0-84bd-fc0cca454905" (UID: "5858a5f6-5bd8-43b0-84bd-fc0cca454905"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.970991 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5858a5f6-5bd8-43b0-84bd-fc0cca454905-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.971029 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gf7r\" (UniqueName: \"kubernetes.io/projected/5858a5f6-5bd8-43b0-84bd-fc0cca454905-kube-api-access-6gf7r\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.971048 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.971060 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.971073 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5858a5f6-5bd8-43b0-84bd-fc0cca454905-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.971106 5008 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.971119 5008 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5858a5f6-5bd8-43b0-84bd-fc0cca454905-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:47 crc kubenswrapper[5008]: I0129 15:49:47.995814 5008 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.072265 5008 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.328475 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.401209 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a4572386-a7c3-434a-8bcb-d1643d6893c9","Type":"ContainerStarted","Data":"7e694d90fa6a6ef1130c12d5f4ef32d5a6b46fd7321b4f1fabcb430d1ab3333d"} Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.403772 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5858a5f6-5bd8-43b0-84bd-fc0cca454905","Type":"ContainerDied","Data":"68397ad26b3459caddedb82fa3d7628ebbcabc96acf3985abf57176a47336435"} Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.403870 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.404068 5008 scope.go:117] "RemoveContainer" containerID="0bfc91d2a4701b82935b807bf656dedc91ce5258bac2402d794853c482c9e6ba" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.450224 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.451150 5008 scope.go:117] "RemoveContainer" containerID="f8a7a58418b5d32fdb5298004fa279c77a6a8f01505d43cf209a60b4445f0b33" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.469159 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.478231 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:49:48 crc kubenswrapper[5008]: E0129 15:49:48.478592 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5858a5f6-5bd8-43b0-84bd-fc0cca454905" containerName="glance-httpd" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.478613 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="5858a5f6-5bd8-43b0-84bd-fc0cca454905" containerName="glance-httpd" Jan 29 15:49:48 crc kubenswrapper[5008]: E0129 15:49:48.478632 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5858a5f6-5bd8-43b0-84bd-fc0cca454905" containerName="glance-log" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.478639 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="5858a5f6-5bd8-43b0-84bd-fc0cca454905" containerName="glance-log" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.478814 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="5858a5f6-5bd8-43b0-84bd-fc0cca454905" containerName="glance-httpd" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.478853 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="5858a5f6-5bd8-43b0-84bd-fc0cca454905" containerName="glance-log" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.479875 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.483973 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.484157 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.506671 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.581576 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.581616 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.581639 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.581674 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-logs\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.581716 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.581735 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzqpv\" (UniqueName: \"kubernetes.io/projected/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-kube-api-access-wzqpv\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.581762 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.581826 5008 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.683935 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.684046 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.684100 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.684120 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.684146 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.684186 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-logs\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.684244 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.684272 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzqpv\" (UniqueName: \"kubernetes.io/projected/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-kube-api-access-wzqpv\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.685122 5008 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.685284 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-logs\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.685371 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.690568 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.691300 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.696352 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.700251 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.704258 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzqpv\" (UniqueName: \"kubernetes.io/projected/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-kube-api-access-wzqpv\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.731639 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:49:48 crc kubenswrapper[5008]: I0129 15:49:48.834982 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 15:49:49 crc kubenswrapper[5008]: I0129 15:49:49.337248 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5858a5f6-5bd8-43b0-84bd-fc0cca454905" path="/var/lib/kubelet/pods/5858a5f6-5bd8-43b0-84bd-fc0cca454905/volumes" Jan 29 15:49:49 crc kubenswrapper[5008]: I0129 15:49:49.341549 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa21b57d-29c9-4b5d-8712-66e3d5762f26" path="/var/lib/kubelet/pods/fa21b57d-29c9-4b5d-8712-66e3d5762f26/volumes" Jan 29 15:49:49 crc kubenswrapper[5008]: I0129 15:49:49.430889 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a4572386-a7c3-434a-8bcb-d1643d6893c9","Type":"ContainerStarted","Data":"e0fa9f1865b5505ccd4891898d3b56eec542add6175364fd360ee56950f55bac"} Jan 29 15:49:49 crc kubenswrapper[5008]: I0129 15:49:49.437692 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:49:49 crc kubenswrapper[5008]: I0129 15:49:49.444820 5008 generic.go:334] "Generic (PLEG): container finished" podID="9069f34b-ed91-4ced-8b05-91b83dd02938" containerID="4235463096f31772a59e698a0a90916f6b2c055027357bae8128e733c3b9757d" exitCode=0 Jan 29 15:49:49 crc kubenswrapper[5008]: I0129 15:49:49.444861 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-fwhd5" event={"ID":"9069f34b-ed91-4ced-8b05-91b83dd02938","Type":"ContainerDied","Data":"4235463096f31772a59e698a0a90916f6b2c055027357bae8128e733c3b9757d"} Jan 29 15:49:50 crc kubenswrapper[5008]: I0129 15:49:50.460391 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"deb07ec3-dbb1-49c4-a9cc-155472fc28bd","Type":"ContainerStarted","Data":"dd3b252c8faadfc964f08468ca0dd6531af9e9a227235dd0778b9ecd9c6cebce"} Jan 29 15:49:50 crc kubenswrapper[5008]: I0129 15:49:50.460843 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"deb07ec3-dbb1-49c4-a9cc-155472fc28bd","Type":"ContainerStarted","Data":"d5ff4add692e0bdecfe0d236bfcf204bfe9c6a37130e4e5f390ced855d6ac026"} Jan 29 15:49:50 crc kubenswrapper[5008]: I0129 15:49:50.462796 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a4572386-a7c3-434a-8bcb-d1643d6893c9","Type":"ContainerStarted","Data":"c487f572a202948b8d78e72676270d3b2c63fcc77e90c053860ecb9f63566609"} Jan 29 15:49:50 crc kubenswrapper[5008]: I0129 15:49:50.466306 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rcl2z" event={"ID":"4ec0e696-652d-463e-b97e-dad0065a543b","Type":"ContainerStarted","Data":"0d834ba968e6d63e097a6aef362d3f06eb5d6b998580ed84a27255f328fc86b5"} Jan 29 15:49:50 crc kubenswrapper[5008]: I0129 15:49:50.497369 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.497347347 podStartE2EDuration="3.497347347s" podCreationTimestamp="2026-01-29 15:49:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:49:50.485219274 +0000 UTC m=+1334.158073531" watchObservedRunningTime="2026-01-29 15:49:50.497347347 +0000 UTC m=+1334.170201584" Jan 29 15:49:50 crc kubenswrapper[5008]: I0129 15:49:50.513900 5008 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/barbican-db-sync-rcl2z" podStartSLOduration=2.559826155 podStartE2EDuration="1m41.513884789s" podCreationTimestamp="2026-01-29 15:48:09 +0000 UTC" firstStartedPulling="2026-01-29 15:48:10.857545749 +0000 UTC m=+1234.530399986" lastFinishedPulling="2026-01-29 15:49:49.811604383 +0000 UTC m=+1333.484458620" observedRunningTime="2026-01-29 15:49:50.50775867 +0000 UTC m=+1334.180612927" watchObservedRunningTime="2026-01-29 15:49:50.513884789 +0000 UTC m=+1334.186739026" Jan 29 15:49:50 crc kubenswrapper[5008]: I0129 15:49:50.858532 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.024941 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-combined-ca-bundle\") pod \"9069f34b-ed91-4ced-8b05-91b83dd02938\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.025052 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-db-sync-config-data\") pod \"9069f34b-ed91-4ced-8b05-91b83dd02938\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.025174 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9069f34b-ed91-4ced-8b05-91b83dd02938-etc-machine-id\") pod \"9069f34b-ed91-4ced-8b05-91b83dd02938\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.025224 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-scripts\") pod \"9069f34b-ed91-4ced-8b05-91b83dd02938\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.025273 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6b5fh\" (UniqueName: \"kubernetes.io/projected/9069f34b-ed91-4ced-8b05-91b83dd02938-kube-api-access-6b5fh\") pod \"9069f34b-ed91-4ced-8b05-91b83dd02938\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.025336 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-config-data\") pod \"9069f34b-ed91-4ced-8b05-91b83dd02938\" (UID: \"9069f34b-ed91-4ced-8b05-91b83dd02938\") " Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.025358 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9069f34b-ed91-4ced-8b05-91b83dd02938-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9069f34b-ed91-4ced-8b05-91b83dd02938" (UID: "9069f34b-ed91-4ced-8b05-91b83dd02938"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.025764 5008 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9069f34b-ed91-4ced-8b05-91b83dd02938-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.030487 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9069f34b-ed91-4ced-8b05-91b83dd02938" (UID: "9069f34b-ed91-4ced-8b05-91b83dd02938"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.031605 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9069f34b-ed91-4ced-8b05-91b83dd02938-kube-api-access-6b5fh" (OuterVolumeSpecName: "kube-api-access-6b5fh") pod "9069f34b-ed91-4ced-8b05-91b83dd02938" (UID: "9069f34b-ed91-4ced-8b05-91b83dd02938"). InnerVolumeSpecName "kube-api-access-6b5fh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.032434 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-scripts" (OuterVolumeSpecName: "scripts") pod "9069f34b-ed91-4ced-8b05-91b83dd02938" (UID: "9069f34b-ed91-4ced-8b05-91b83dd02938"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.063638 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9069f34b-ed91-4ced-8b05-91b83dd02938" (UID: "9069f34b-ed91-4ced-8b05-91b83dd02938"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.079736 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-config-data" (OuterVolumeSpecName: "config-data") pod "9069f34b-ed91-4ced-8b05-91b83dd02938" (UID: "9069f34b-ed91-4ced-8b05-91b83dd02938"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.120336 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.127122 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.127182 5008 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.127220 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.127242 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6b5fh\" (UniqueName: \"kubernetes.io/projected/9069f34b-ed91-4ced-8b05-91b83dd02938-kube-api-access-6b5fh\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.127263 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9069f34b-ed91-4ced-8b05-91b83dd02938-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.475580 5008 generic.go:334] "Generic (PLEG): container finished" podID="6c2a1a18-16ff-4419-b233-8649579edbea" containerID="ea56cb31969ede4dc77690e8380474b589122f4e8ba458f2575d15b6351054fb" exitCode=0 Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.475646 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4h8lc" event={"ID":"6c2a1a18-16ff-4419-b233-8649579edbea","Type":"ContainerDied","Data":"ea56cb31969ede4dc77690e8380474b589122f4e8ba458f2575d15b6351054fb"} Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.481192 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"deb07ec3-dbb1-49c4-a9cc-155472fc28bd","Type":"ContainerStarted","Data":"545a1369d45b715a3fe719964ed37da74cd517e9b86ae7060e6fa55a82e6ac61"} Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.484502 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-fwhd5" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.485022 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-fwhd5" event={"ID":"9069f34b-ed91-4ced-8b05-91b83dd02938","Type":"ContainerDied","Data":"87157863b5fd88414615bafc24d16f0a62d9f4319c320d4d86a810d58443cfe6"} Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.485050 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87157863b5fd88414615bafc24d16f0a62d9f4319c320d4d86a810d58443cfe6" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.527974 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.527952908 podStartE2EDuration="3.527952908s" podCreationTimestamp="2026-01-29 15:49:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:49:51.51772539 +0000 UTC m=+1335.190579667" watchObservedRunningTime="2026-01-29 15:49:51.527952908 +0000 UTC m=+1335.200807145" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.529075 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.717352 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 15:49:51 crc kubenswrapper[5008]: E0129 15:49:51.717689 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9069f34b-ed91-4ced-8b05-91b83dd02938" containerName="cinder-db-sync" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.717705 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="9069f34b-ed91-4ced-8b05-91b83dd02938" containerName="cinder-db-sync" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.717916 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="9069f34b-ed91-4ced-8b05-91b83dd02938" containerName="cinder-db-sync" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.719103 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.722763 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.723052 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.723196 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-x6pwm" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.731181 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.754826 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.792192 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-ltv6m"] Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.792809 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" podUID="eeec0b0d-d386-486c-9bd7-2dfe88016cd8" containerName="dnsmasq-dns" containerID="cri-o://8f1b00e6962ba213860058464826f9ee3c7898cafeff02094c1871a86a85758a" gracePeriod=10 Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.800965 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.823801 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d6bd97c5-9t6nm"] Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.825111 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.843519 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.843588 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-scripts\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.843663 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d01ff2cd-2707-4765-a399-a68312196c22-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.843694 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hzd8\" (UniqueName: \"kubernetes.io/projected/d01ff2cd-2707-4765-a399-a68312196c22-kube-api-access-4hzd8\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.843758 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.843821 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-config-data\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.858694 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d6bd97c5-9t6nm"] Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.945655 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.945705 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-config\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.945730 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-ovsdbserver-sb\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.945750 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-scripts\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.945809 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-dns-swift-storage-0\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.945830 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-ovsdbserver-nb\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.945852 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d01ff2cd-2707-4765-a399-a68312196c22-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.945869 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-dns-svc\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.945890 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hzd8\" (UniqueName: \"kubernetes.io/projected/d01ff2cd-2707-4765-a399-a68312196c22-kube-api-access-4hzd8\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.945916 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdbwm\" (UniqueName: \"kubernetes.io/projected/13aa614a-9b27-4f4d-a135-a7ee67864df9-kube-api-access-xdbwm\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.945951 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.945991 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-config-data\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.952878 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-config-data\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.952962 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d01ff2cd-2707-4765-a399-a68312196c22-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.961732 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-scripts\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.967470 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.973573 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.993600 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.994991 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.998183 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 29 15:49:51 crc kubenswrapper[5008]: I0129 15:49:51.998476 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hzd8\" (UniqueName: \"kubernetes.io/projected/d01ff2cd-2707-4765-a399-a68312196c22-kube-api-access-4hzd8\") pod \"cinder-scheduler-0\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " pod="openstack/cinder-scheduler-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.002446 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.047910 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-dns-svc\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.047973 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdbwm\" (UniqueName: \"kubernetes.io/projected/13aa614a-9b27-4f4d-a135-a7ee67864df9-kube-api-access-xdbwm\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.048075 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-config\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.048103 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-ovsdbserver-sb\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.048154 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-ovsdbserver-nb\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.048172 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-dns-swift-storage-0\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.049050 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-dns-swift-storage-0\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.049569 5008 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-dns-svc\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.050423 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-config\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.051009 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-ovsdbserver-sb\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.051058 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-ovsdbserver-nb\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.051196 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.069294 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdbwm\" (UniqueName: \"kubernetes.io/projected/13aa614a-9b27-4f4d-a135-a7ee67864df9-kube-api-access-xdbwm\") pod \"dnsmasq-dns-5d6bd97c5-9t6nm\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.149633 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.149697 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.149760 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-scripts\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.149811 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-config-data\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.149864 5008 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-logs\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.149897 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-config-data-custom\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.150005 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg756\" (UniqueName: \"kubernetes.io/projected/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-kube-api-access-sg756\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.156285 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.252027 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-scripts\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.252355 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-config-data\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.252394 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-logs\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.252427 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-config-data-custom\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.252478 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg756\" (UniqueName: \"kubernetes.io/projected/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-kube-api-access-sg756\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.252573 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.252604 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.255684 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-logs\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.256575 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.263499 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.264889 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-config-data\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.267696 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-config-data-custom\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.271284 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-scripts\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.287919 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg756\" (UniqueName: \"kubernetes.io/projected/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-kube-api-access-sg756\") pod \"cinder-api-0\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.508355 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.552491 5008 generic.go:334] "Generic (PLEG): container finished" podID="eeec0b0d-d386-486c-9bd7-2dfe88016cd8" containerID="8f1b00e6962ba213860058464826f9ee3c7898cafeff02094c1871a86a85758a" exitCode=0 Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.553387 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.553600 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" event={"ID":"eeec0b0d-d386-486c-9bd7-2dfe88016cd8","Type":"ContainerDied","Data":"8f1b00e6962ba213860058464826f9ee3c7898cafeff02094c1871a86a85758a"} Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.553691 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" event={"ID":"eeec0b0d-d386-486c-9bd7-2dfe88016cd8","Type":"ContainerDied","Data":"ae0b8d6c25c2b8b74e6f25f289f7b9be41b0f6b931b8004d5a8d1e2aa3fcb1dc"} Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.553714 5008 scope.go:117] "RemoveContainer" containerID="8f1b00e6962ba213860058464826f9ee3c7898cafeff02094c1871a86a85758a" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.578296 5008 scope.go:117] "RemoveContainer" containerID="79dbfd36569d422fdc7006449ff5ac80732d06ddd7a01c876a0b70533ac654e5" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.622037 5008 scope.go:117] "RemoveContainer" containerID="8f1b00e6962ba213860058464826f9ee3c7898cafeff02094c1871a86a85758a" Jan 29 15:49:52 crc kubenswrapper[5008]: E0129 15:49:52.623960 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f1b00e6962ba213860058464826f9ee3c7898cafeff02094c1871a86a85758a\": container with ID starting with 8f1b00e6962ba213860058464826f9ee3c7898cafeff02094c1871a86a85758a not found: ID does not exist" containerID="8f1b00e6962ba213860058464826f9ee3c7898cafeff02094c1871a86a85758a" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.624006 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f1b00e6962ba213860058464826f9ee3c7898cafeff02094c1871a86a85758a"} err="failed to get container status \"8f1b00e6962ba213860058464826f9ee3c7898cafeff02094c1871a86a85758a\": rpc error: code = NotFound desc = could not find container \"8f1b00e6962ba213860058464826f9ee3c7898cafeff02094c1871a86a85758a\": container with ID starting with 8f1b00e6962ba213860058464826f9ee3c7898cafeff02094c1871a86a85758a not found: ID does not exist" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.624035 5008 scope.go:117] "RemoveContainer" containerID="79dbfd36569d422fdc7006449ff5ac80732d06ddd7a01c876a0b70533ac654e5" Jan 29 15:49:52 crc kubenswrapper[5008]: E0129 15:49:52.624389 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79dbfd36569d422fdc7006449ff5ac80732d06ddd7a01c876a0b70533ac654e5\": container with ID starting with 79dbfd36569d422fdc7006449ff5ac80732d06ddd7a01c876a0b70533ac654e5 not found: ID does not exist" containerID="79dbfd36569d422fdc7006449ff5ac80732d06ddd7a01c876a0b70533ac654e5" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.624425 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79dbfd36569d422fdc7006449ff5ac80732d06ddd7a01c876a0b70533ac654e5"} err="failed to get container status \"79dbfd36569d422fdc7006449ff5ac80732d06ddd7a01c876a0b70533ac654e5\": rpc error: code = NotFound desc = could not find container \"79dbfd36569d422fdc7006449ff5ac80732d06ddd7a01c876a0b70533ac654e5\": container with ID starting with 79dbfd36569d422fdc7006449ff5ac80732d06ddd7a01c876a0b70533ac654e5 not found: ID does not exist" Jan 29 15:49:52 crc 
kubenswrapper[5008]: I0129 15:49:52.663459 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-ovsdbserver-nb\") pod \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.663530 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-dns-swift-storage-0\") pod \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.663580 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-ovsdbserver-sb\") pod \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.663603 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4tmk\" (UniqueName: \"kubernetes.io/projected/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-kube-api-access-n4tmk\") pod \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.663634 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-config\") pod \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.663749 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-dns-svc\") pod \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\" (UID: \"eeec0b0d-d386-486c-9bd7-2dfe88016cd8\") " Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.677019 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-kube-api-access-n4tmk" (OuterVolumeSpecName: "kube-api-access-n4tmk") pod "eeec0b0d-d386-486c-9bd7-2dfe88016cd8" (UID: "eeec0b0d-d386-486c-9bd7-2dfe88016cd8"). InnerVolumeSpecName "kube-api-access-n4tmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.687311 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.720799 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-config" (OuterVolumeSpecName: "config") pod "eeec0b0d-d386-486c-9bd7-2dfe88016cd8" (UID: "eeec0b0d-d386-486c-9bd7-2dfe88016cd8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.734313 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eeec0b0d-d386-486c-9bd7-2dfe88016cd8" (UID: "eeec0b0d-d386-486c-9bd7-2dfe88016cd8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.742210 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eeec0b0d-d386-486c-9bd7-2dfe88016cd8" (UID: "eeec0b0d-d386-486c-9bd7-2dfe88016cd8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.743344 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eeec0b0d-d386-486c-9bd7-2dfe88016cd8" (UID: "eeec0b0d-d386-486c-9bd7-2dfe88016cd8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.748370 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "eeec0b0d-d386-486c-9bd7-2dfe88016cd8" (UID: "eeec0b0d-d386-486c-9bd7-2dfe88016cd8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.769345 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.769390 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4tmk\" (UniqueName: \"kubernetes.io/projected/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-kube-api-access-n4tmk\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.769405 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.769417 5008 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.769428 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.769440 5008 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eeec0b0d-d386-486c-9bd7-2dfe88016cd8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.917700 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d6bd97c5-9t6nm"] Jan 29 15:49:52 crc kubenswrapper[5008]: I0129 15:49:52.977156 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-4h8lc" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.073466 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6c2a1a18-16ff-4419-b233-8649579edbea-config\") pod \"6c2a1a18-16ff-4419-b233-8649579edbea\" (UID: \"6c2a1a18-16ff-4419-b233-8649579edbea\") " Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.073580 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmvz6\" (UniqueName: \"kubernetes.io/projected/6c2a1a18-16ff-4419-b233-8649579edbea-kube-api-access-hmvz6\") pod \"6c2a1a18-16ff-4419-b233-8649579edbea\" (UID: \"6c2a1a18-16ff-4419-b233-8649579edbea\") " Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.073606 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2a1a18-16ff-4419-b233-8649579edbea-combined-ca-bundle\") pod \"6c2a1a18-16ff-4419-b233-8649579edbea\" (UID: \"6c2a1a18-16ff-4419-b233-8649579edbea\") " Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.079131 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c2a1a18-16ff-4419-b233-8649579edbea-kube-api-access-hmvz6" (OuterVolumeSpecName: "kube-api-access-hmvz6") pod "6c2a1a18-16ff-4419-b233-8649579edbea" (UID: "6c2a1a18-16ff-4419-b233-8649579edbea"). InnerVolumeSpecName "kube-api-access-hmvz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.103927 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c2a1a18-16ff-4419-b233-8649579edbea-config" (OuterVolumeSpecName: "config") pod "6c2a1a18-16ff-4419-b233-8649579edbea" (UID: "6c2a1a18-16ff-4419-b233-8649579edbea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.147856 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c2a1a18-16ff-4419-b233-8649579edbea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c2a1a18-16ff-4419-b233-8649579edbea" (UID: "6c2a1a18-16ff-4419-b233-8649579edbea"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.176263 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/6c2a1a18-16ff-4419-b233-8649579edbea-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.176297 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmvz6\" (UniqueName: \"kubernetes.io/projected/6c2a1a18-16ff-4419-b233-8649579edbea-kube-api-access-hmvz6\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.176309 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2a1a18-16ff-4419-b233-8649579edbea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.205902 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.413563 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.582556 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d01ff2cd-2707-4765-a399-a68312196c22","Type":"ContainerStarted","Data":"57c9901e381187fc7eb0fcdcbe0d130f0d9a3aa88a3658cef67338340e39620e"} Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.585923 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0c2eec64-4eaa-4412-9ff7-dad5918c12c8","Type":"ContainerStarted","Data":"6656a69f4cd648a8aa3695a0ddc7bc96445ac83b10c1e0933a0183bb3570fe1e"} Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.589515 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-ltv6m" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.595346 5008 generic.go:334] "Generic (PLEG): container finished" podID="13aa614a-9b27-4f4d-a135-a7ee67864df9" containerID="25cc2e560f073aac6e9502dd45888e6009db4de2cc6eecbdc6f87e9a1e6e7041" exitCode=0 Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.595422 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" event={"ID":"13aa614a-9b27-4f4d-a135-a7ee67864df9","Type":"ContainerDied","Data":"25cc2e560f073aac6e9502dd45888e6009db4de2cc6eecbdc6f87e9a1e6e7041"} Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.595448 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" event={"ID":"13aa614a-9b27-4f4d-a135-a7ee67864df9","Type":"ContainerStarted","Data":"41f24142aa6f79d88b5af0a20bc8f3202ba12b85b127cf5d8b45441b8876beaf"} Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.611469 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4h8lc" event={"ID":"6c2a1a18-16ff-4419-b233-8649579edbea","Type":"ContainerDied","Data":"07e336009f3d0d4bad7a27492f349aabeb9348d525d8a5111ca33499deca9afe"} Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.611506 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07e336009f3d0d4bad7a27492f349aabeb9348d525d8a5111ca33499deca9afe" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.611533 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-4h8lc" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.688571 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-ltv6m"] Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.719831 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-ltv6m"] Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.734031 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d6bd97c5-9t6nm"] Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.757446 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-774db89647-tm89m"] Jan 29 15:49:53 crc kubenswrapper[5008]: E0129 15:49:53.757861 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeec0b0d-d386-486c-9bd7-2dfe88016cd8" containerName="dnsmasq-dns" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.757875 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeec0b0d-d386-486c-9bd7-2dfe88016cd8" containerName="dnsmasq-dns" Jan 29 15:49:53 crc kubenswrapper[5008]: E0129 15:49:53.757911 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeec0b0d-d386-486c-9bd7-2dfe88016cd8" containerName="init" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.757917 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeec0b0d-d386-486c-9bd7-2dfe88016cd8" containerName="init" Jan 29 15:49:53 crc kubenswrapper[5008]: E0129 15:49:53.757926 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c2a1a18-16ff-4419-b233-8649579edbea" containerName="neutron-db-sync" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.757932 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c2a1a18-16ff-4419-b233-8649579edbea" containerName="neutron-db-sync" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.758120 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeec0b0d-d386-486c-9bd7-2dfe88016cd8" containerName="dnsmasq-dns" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.758134 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c2a1a18-16ff-4419-b233-8649579edbea" containerName="neutron-db-sync" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.759050 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.773391 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-774db89647-tm89m"] Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.790464 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-74c948b66b-9krkd"] Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.791857 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.797016 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.797272 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.797373 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qg4fq" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.797831 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.800572 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74c948b66b-9krkd"] Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.894737 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhflq\" (UniqueName: \"kubernetes.io/projected/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-kube-api-access-lhflq\") pod \"neutron-74c948b66b-9krkd\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.894842 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-ovsdbserver-sb\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.894878 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-dns-swift-storage-0\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.894903 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-ovsdbserver-nb\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.895028 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-httpd-config\") pod \"neutron-74c948b66b-9krkd\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.895121 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-combined-ca-bundle\") pod \"neutron-74c948b66b-9krkd\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.895335 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-config\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.895375 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-ovndb-tls-certs\") pod \"neutron-74c948b66b-9krkd\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.895402 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-dns-svc\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.895478 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-config\") pod \"neutron-74c948b66b-9krkd\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.895591 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlzfw\" (UniqueName: \"kubernetes.io/projected/198c1bb9-c544-4f02-9b28-983302b67f85-kube-api-access-xlzfw\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.996980 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhflq\" (UniqueName: \"kubernetes.io/projected/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-kube-api-access-lhflq\") pod \"neutron-74c948b66b-9krkd\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.997036 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-ovsdbserver-sb\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.997055 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-dns-swift-storage-0\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.997081 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-ovsdbserver-nb\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.997098 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-httpd-config\") pod \"neutron-74c948b66b-9krkd\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.997124 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-combined-ca-bundle\") pod \"neutron-74c948b66b-9krkd\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.997248 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-config\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.997267 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-ovndb-tls-certs\") pod \"neutron-74c948b66b-9krkd\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.997286 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-dns-svc\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.997306 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-config\") pod \"neutron-74c948b66b-9krkd\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.997340 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlzfw\" (UniqueName: \"kubernetes.io/projected/198c1bb9-c544-4f02-9b28-983302b67f85-kube-api-access-xlzfw\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.998488 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-ovsdbserver-sb\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:53 crc kubenswrapper[5008]: I0129 15:49:53.999159 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-config\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.000853 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-dns-svc\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " 
pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.001390 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-dns-swift-storage-0\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.001594 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-ovsdbserver-nb\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.004774 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-ovndb-tls-certs\") pod \"neutron-74c948b66b-9krkd\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.004923 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-config\") pod \"neutron-74c948b66b-9krkd\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.005404 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-httpd-config\") pod \"neutron-74c948b66b-9krkd\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.015108 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-combined-ca-bundle\") pod \"neutron-74c948b66b-9krkd\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.018250 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhflq\" (UniqueName: \"kubernetes.io/projected/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-kube-api-access-lhflq\") pod \"neutron-74c948b66b-9krkd\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") " pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.023506 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlzfw\" (UniqueName: \"kubernetes.io/projected/198c1bb9-c544-4f02-9b28-983302b67f85-kube-api-access-xlzfw\") pod \"dnsmasq-dns-774db89647-tm89m\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.095407 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.110663 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.197257 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 15:49:54 crc kubenswrapper[5008]: E0129 15:49:54.208009 5008 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 29 15:49:54 crc kubenswrapper[5008]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/13aa614a-9b27-4f4d-a135-a7ee67864df9/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 29 15:49:54 crc kubenswrapper[5008]: > podSandboxID="41f24142aa6f79d88b5af0a20bc8f3202ba12b85b127cf5d8b45441b8876beaf" Jan 29 15:49:54 crc kubenswrapper[5008]: E0129 15:49:54.208180 5008 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 15:49:54 crc kubenswrapper[5008]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8bh66fh5d9h598h646h55dhb6h5bdh64h5c7h7bh5f6h559h55dh6hddh65bh644h55bh64bh669h5hcbhdbh564h5bfh67ch5d4h5fh657h5b7h675q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xdbwm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5d6bd97c5-9t6nm_openstack(13aa614a-9b27-4f4d-a135-a7ee67864df9): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/13aa614a-9b27-4f4d-a135-a7ee67864df9/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 29 15:49:54 crc kubenswrapper[5008]: > logger="UnhandledError" Jan 29 15:49:54 crc kubenswrapper[5008]: E0129 15:49:54.210103 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/13aa614a-9b27-4f4d-a135-a7ee67864df9/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" podUID="13aa614a-9b27-4f4d-a135-a7ee67864df9" Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.262933 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-bf5f5fc4b-t9vk7" Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.347066 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7f49b8c48b-x77zl"] Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.347296 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7f49b8c48b-x77zl" podUID="8c3bbcd6-6512-4439-b70d-f46dd6382cfe" containerName="horizon-log" containerID="cri-o://c27f9304d6725c80976f2a7ffbaadb3b415bca1c1d26fe7cd46a2a94470354ae" gracePeriod=30 Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.347715 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7f49b8c48b-x77zl" podUID="8c3bbcd6-6512-4439-b70d-f46dd6382cfe" containerName="horizon" containerID="cri-o://864603c565caf07038d917f5b4aaaeae46b873a4ad67b66ea1932218a20e7fdd" gracePeriod=30 Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.626111 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0c2eec64-4eaa-4412-9ff7-dad5918c12c8","Type":"ContainerStarted","Data":"6fb9ad78b8cfc33e60172b80d4b4df57814803c7224a9357a1c3e296f8b0d427"} Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.839415 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-774db89647-tm89m"] Jan 29 15:49:54 crc kubenswrapper[5008]: W0129 15:49:54.867666 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod198c1bb9_c544_4f02_9b28_983302b67f85.slice/crio-fe4d27a42fca0f64cafefb978a52eff74b34c4b2a357e4ac6b7f8c5c5f84788a WatchSource:0}: Error finding container 
fe4d27a42fca0f64cafefb978a52eff74b34c4b2a357e4ac6b7f8c5c5f84788a: Status 404 returned error can't find the container with id fe4d27a42fca0f64cafefb978a52eff74b34c4b2a357e4ac6b7f8c5c5f84788a Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.940342 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:54 crc kubenswrapper[5008]: I0129 15:49:54.945777 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74c948b66b-9krkd"] Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.023925 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-config\") pod \"13aa614a-9b27-4f4d-a135-a7ee67864df9\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.023992 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-dns-svc\") pod \"13aa614a-9b27-4f4d-a135-a7ee67864df9\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.024108 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-ovsdbserver-sb\") pod \"13aa614a-9b27-4f4d-a135-a7ee67864df9\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.024144 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-dns-swift-storage-0\") pod \"13aa614a-9b27-4f4d-a135-a7ee67864df9\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.024216 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-ovsdbserver-nb\") pod \"13aa614a-9b27-4f4d-a135-a7ee67864df9\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.024237 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdbwm\" (UniqueName: \"kubernetes.io/projected/13aa614a-9b27-4f4d-a135-a7ee67864df9-kube-api-access-xdbwm\") pod \"13aa614a-9b27-4f4d-a135-a7ee67864df9\" (UID: \"13aa614a-9b27-4f4d-a135-a7ee67864df9\") " Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.052555 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13aa614a-9b27-4f4d-a135-a7ee67864df9-kube-api-access-xdbwm" (OuterVolumeSpecName: "kube-api-access-xdbwm") pod "13aa614a-9b27-4f4d-a135-a7ee67864df9" (UID: "13aa614a-9b27-4f4d-a135-a7ee67864df9"). InnerVolumeSpecName "kube-api-access-xdbwm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.126736 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdbwm\" (UniqueName: \"kubernetes.io/projected/13aa614a-9b27-4f4d-a135-a7ee67864df9-kube-api-access-xdbwm\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.169722 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "13aa614a-9b27-4f4d-a135-a7ee67864df9" (UID: "13aa614a-9b27-4f4d-a135-a7ee67864df9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.175218 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-config" (OuterVolumeSpecName: "config") pod "13aa614a-9b27-4f4d-a135-a7ee67864df9" (UID: "13aa614a-9b27-4f4d-a135-a7ee67864df9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.181054 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "13aa614a-9b27-4f4d-a135-a7ee67864df9" (UID: "13aa614a-9b27-4f4d-a135-a7ee67864df9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.209340 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "13aa614a-9b27-4f4d-a135-a7ee67864df9" (UID: "13aa614a-9b27-4f4d-a135-a7ee67864df9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.216725 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "13aa614a-9b27-4f4d-a135-a7ee67864df9" (UID: "13aa614a-9b27-4f4d-a135-a7ee67864df9"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.230128 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.230254 5008 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.230330 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.230411 5008 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.230486 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13aa614a-9b27-4f4d-a135-a7ee67864df9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.337459 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeec0b0d-d386-486c-9bd7-2dfe88016cd8" path="/var/lib/kubelet/pods/eeec0b0d-d386-486c-9bd7-2dfe88016cd8/volumes" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.640315 5008 generic.go:334] "Generic (PLEG): container finished" podID="198c1bb9-c544-4f02-9b28-983302b67f85" containerID="5992353136cc63043471174685289b57a122a180a840f4ae96151af03ba57534" exitCode=0 Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.640479 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-774db89647-tm89m" event={"ID":"198c1bb9-c544-4f02-9b28-983302b67f85","Type":"ContainerDied","Data":"5992353136cc63043471174685289b57a122a180a840f4ae96151af03ba57534"} Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.640826 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-774db89647-tm89m" event={"ID":"198c1bb9-c544-4f02-9b28-983302b67f85","Type":"ContainerStarted","Data":"fe4d27a42fca0f64cafefb978a52eff74b34c4b2a357e4ac6b7f8c5c5f84788a"} Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.650127 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c948b66b-9krkd" event={"ID":"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2","Type":"ContainerStarted","Data":"bdd8b5ad2f9dd0f7075ba3ebd36ca61dffe898dd3c726e03f48336bce5f5eb32"} Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.650170 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c948b66b-9krkd" event={"ID":"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2","Type":"ContainerStarted","Data":"04b65eba50b91345633c6fc5a3520c31c3922a473da83be590641f8a8f92912a"} Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.652799 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d01ff2cd-2707-4765-a399-a68312196c22","Type":"ContainerStarted","Data":"b75f2a4361779c7b8425fd94ecbf05c19e481194aa4b56d42b2abd6ec2919902"} Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.667292 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-api-0" event={"ID":"0c2eec64-4eaa-4412-9ff7-dad5918c12c8","Type":"ContainerStarted","Data":"c9201e193ff2e5a2b26c3ff616dcb3f4435c4982dabca37e22678acddcd52a0c"} Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.667430 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0c2eec64-4eaa-4412-9ff7-dad5918c12c8" containerName="cinder-api-log" containerID="cri-o://6fb9ad78b8cfc33e60172b80d4b4df57814803c7224a9357a1c3e296f8b0d427" gracePeriod=30 Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.667499 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0c2eec64-4eaa-4412-9ff7-dad5918c12c8" containerName="cinder-api" containerID="cri-o://c9201e193ff2e5a2b26c3ff616dcb3f4435c4982dabca37e22678acddcd52a0c" gracePeriod=30 Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.667512 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.680471 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" event={"ID":"13aa614a-9b27-4f4d-a135-a7ee67864df9","Type":"ContainerDied","Data":"41f24142aa6f79d88b5af0a20bc8f3202ba12b85b127cf5d8b45441b8876beaf"} Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.680525 5008 scope.go:117] "RemoveContainer" containerID="25cc2e560f073aac6e9502dd45888e6009db4de2cc6eecbdc6f87e9a1e6e7041" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.680697 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d6bd97c5-9t6nm" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.702380 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.702363843 podStartE2EDuration="4.702363843s" podCreationTimestamp="2026-01-29 15:49:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:49:55.684638763 +0000 UTC m=+1339.357493000" watchObservedRunningTime="2026-01-29 15:49:55.702363843 +0000 UTC m=+1339.375218080" Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.772954 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d6bd97c5-9t6nm"] Jan 29 15:49:55 crc kubenswrapper[5008]: I0129 15:49:55.784232 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d6bd97c5-9t6nm"] Jan 29 15:49:56 crc kubenswrapper[5008]: I0129 15:49:56.697242 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-774db89647-tm89m" event={"ID":"198c1bb9-c544-4f02-9b28-983302b67f85","Type":"ContainerStarted","Data":"3b493622238ba247bd3a423fda4a6f572ff13e66c0b2cd863b93d7fa09956597"} Jan 29 15:49:56 crc kubenswrapper[5008]: I0129 15:49:56.697719 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:49:56 crc kubenswrapper[5008]: I0129 15:49:56.701764 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c948b66b-9krkd" event={"ID":"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2","Type":"ContainerStarted","Data":"07ed4b32a695d898c860c162dfa7b0d1cb072e63d6b2dbb86d1f05987c9972fb"} Jan 29 15:49:56 crc kubenswrapper[5008]: I0129 15:49:56.701858 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:49:56 crc kubenswrapper[5008]: I0129 15:49:56.706192 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d01ff2cd-2707-4765-a399-a68312196c22","Type":"ContainerStarted","Data":"69665425f19a49b5cdcfb4255b47fbfaaa95a031ae37ae6f7818c9b5e08c3fc8"} Jan 29 15:49:56 crc kubenswrapper[5008]: I0129 15:49:56.709051 5008 generic.go:334] "Generic (PLEG): container finished" podID="0c2eec64-4eaa-4412-9ff7-dad5918c12c8" containerID="c9201e193ff2e5a2b26c3ff616dcb3f4435c4982dabca37e22678acddcd52a0c" exitCode=0 Jan 29 15:49:56 crc kubenswrapper[5008]: I0129 15:49:56.709077 5008 generic.go:334] "Generic (PLEG): container finished" podID="0c2eec64-4eaa-4412-9ff7-dad5918c12c8" containerID="6fb9ad78b8cfc33e60172b80d4b4df57814803c7224a9357a1c3e296f8b0d427" exitCode=143 Jan 29 15:49:56 crc kubenswrapper[5008]: I0129 15:49:56.709125 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0c2eec64-4eaa-4412-9ff7-dad5918c12c8","Type":"ContainerDied","Data":"c9201e193ff2e5a2b26c3ff616dcb3f4435c4982dabca37e22678acddcd52a0c"} Jan 29 15:49:56 crc kubenswrapper[5008]: I0129 15:49:56.709146 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0c2eec64-4eaa-4412-9ff7-dad5918c12c8","Type":"ContainerDied","Data":"6fb9ad78b8cfc33e60172b80d4b4df57814803c7224a9357a1c3e296f8b0d427"} Jan 29 15:49:56 crc kubenswrapper[5008]: I0129 15:49:56.723191 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-774db89647-tm89m" podStartSLOduration=3.723167046 podStartE2EDuration="3.723167046s" podCreationTimestamp="2026-01-29 15:49:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:49:56.717321814 +0000 UTC m=+1340.390176051" watchObservedRunningTime="2026-01-29 15:49:56.723167046 +0000 UTC m=+1340.396021303" Jan 29 15:49:56 crc kubenswrapper[5008]: I0129 15:49:56.754346 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-74c948b66b-9krkd" podStartSLOduration=3.754324241 podStartE2EDuration="3.754324241s" podCreationTimestamp="2026-01-29 15:49:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:49:56.743603951 +0000 UTC m=+1340.416458188" watchObservedRunningTime="2026-01-29 15:49:56.754324241 +0000 UTC m=+1340.427178488" Jan 29 15:49:56 crc kubenswrapper[5008]: I0129 15:49:56.770241 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.527820799 podStartE2EDuration="5.770222777s" podCreationTimestamp="2026-01-29 15:49:51 +0000 UTC" firstStartedPulling="2026-01-29 15:49:52.703754881 +0000 UTC m=+1336.376609118" lastFinishedPulling="2026-01-29 15:49:53.946156859 +0000 UTC m=+1337.619011096" observedRunningTime="2026-01-29 15:49:56.766140469 +0000 UTC m=+1340.438994716" watchObservedRunningTime="2026-01-29 15:49:56.770222777 +0000 UTC m=+1340.443077014" Jan 29 15:49:57 crc kubenswrapper[5008]: I0129 15:49:57.052644 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 29 15:49:57 crc kubenswrapper[5008]: I0129 15:49:57.336793 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13aa614a-9b27-4f4d-a135-a7ee67864df9" 
path="/var/lib/kubelet/pods/13aa614a-9b27-4f4d-a135-a7ee67864df9/volumes" Jan 29 15:49:57 crc kubenswrapper[5008]: I0129 15:49:57.720506 5008 generic.go:334] "Generic (PLEG): container finished" podID="8c3bbcd6-6512-4439-b70d-f46dd6382cfe" containerID="864603c565caf07038d917f5b4aaaeae46b873a4ad67b66ea1932218a20e7fdd" exitCode=0 Jan 29 15:49:57 crc kubenswrapper[5008]: I0129 15:49:57.720916 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f49b8c48b-x77zl" event={"ID":"8c3bbcd6-6512-4439-b70d-f46dd6382cfe","Type":"ContainerDied","Data":"864603c565caf07038d917f5b4aaaeae46b873a4ad67b66ea1932218a20e7fdd"} Jan 29 15:49:57 crc kubenswrapper[5008]: E0129 15:49:57.733141 5008 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c3bbcd6_6512_4439_b70d_f46dd6382cfe.slice/crio-conmon-864603c565caf07038d917f5b4aaaeae46b873a4ad67b66ea1932218a20e7fdd.scope\": RecentStats: unable to find data in memory cache]" Jan 29 15:49:57 crc kubenswrapper[5008]: I0129 15:49:57.785709 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 15:49:57 crc kubenswrapper[5008]: I0129 15:49:57.785765 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 15:49:57 crc kubenswrapper[5008]: I0129 15:49:57.827906 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 15:49:57 crc kubenswrapper[5008]: I0129 15:49:57.841349 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.391994 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-98cff5df-8qpcl"] Jan 29 15:49:58 crc kubenswrapper[5008]: E0129 15:49:58.396839 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13aa614a-9b27-4f4d-a135-a7ee67864df9" containerName="init" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.396888 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="13aa614a-9b27-4f4d-a135-a7ee67864df9" containerName="init" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.397102 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="13aa614a-9b27-4f4d-a135-a7ee67864df9" containerName="init" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.398024 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.401374 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-98cff5df-8qpcl"] Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.401756 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.402099 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.502307 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-internal-tls-certs\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.502462 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkcsj\" (UniqueName: \"kubernetes.io/projected/6bf14a27-dc0a-430e-affa-a6a28e944947-kube-api-access-dkcsj\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.502616 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-public-tls-certs\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.502749 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-combined-ca-bundle\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.502808 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-httpd-config\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.502831 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-ovndb-tls-certs\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.503227 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-config\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.604809 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-config\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.604876 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-internal-tls-certs\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.604901 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkcsj\" (UniqueName: \"kubernetes.io/projected/6bf14a27-dc0a-430e-affa-a6a28e944947-kube-api-access-dkcsj\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.604938 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-public-tls-certs\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.604968 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-combined-ca-bundle\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.604984 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-httpd-config\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.605001 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-ovndb-tls-certs\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.612074 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-combined-ca-bundle\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.612894 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-httpd-config\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.612914 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-public-tls-certs\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " 
pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.613388 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-internal-tls-certs\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.614553 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-config\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.628833 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf14a27-dc0a-430e-affa-a6a28e944947-ovndb-tls-certs\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.629562 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkcsj\" (UniqueName: \"kubernetes.io/projected/6bf14a27-dc0a-430e-affa-a6a28e944947-kube-api-access-dkcsj\") pod \"neutron-98cff5df-8qpcl\" (UID: \"6bf14a27-dc0a-430e-affa-a6a28e944947\") " pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.731185 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.731227 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.732856 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.837156 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.837812 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.877932 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 15:49:58 crc kubenswrapper[5008]: I0129 15:49:58.890170 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 15:49:59 crc kubenswrapper[5008]: I0129 15:49:59.135259 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7f49b8c48b-x77zl" podUID="8c3bbcd6-6512-4439-b70d-f46dd6382cfe" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Jan 29 15:49:59 crc kubenswrapper[5008]: I0129 15:49:59.740186 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 15:49:59 crc kubenswrapper[5008]: I0129 15:49:59.740834 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 15:50:00 crc kubenswrapper[5008]: I0129 15:50:00.764139 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 15:50:00 crc kubenswrapper[5008]: I0129 15:50:00.764243 5008 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 15:50:00 crc kubenswrapper[5008]: I0129 15:50:00.810092 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 15:50:01 crc kubenswrapper[5008]: I0129 15:50:01.654152 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 15:50:01 crc kubenswrapper[5008]: I0129 15:50:01.669868 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 15:50:01 crc kubenswrapper[5008]: I0129 15:50:01.946221 5008 util.go:48] "No ready sandbox for pod can be found. 
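[Annotation] The horizon readiness failure just above is an ordinary HTTP GET that the kubelet prober issues against the pod IP; "connect: connection refused" simply means nothing was listening yet on 10.217.0.145:8443. A rough stand-alone equivalent, with the URL hard-coded from the log line and the prober's header handling omitted:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 1 * time.Second,
		// Probe-style check: skip certificate verification, as the kubelet
		// does for HTTPS probes against pod-local endpoints.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/")
	if err != nil {
		fmt.Println("probe failure:", err) // e.g. "connect: connection refused"
		return
	}
	defer resp.Body.Close()
	// Treat any 2xx/3xx status as success, as HTTP probes do.
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		fmt.Println("probe success:", resp.Status)
	} else {
		fmt.Println("probe failure:", resp.Status)
	}
}

The dnsmasq-dns readiness failure further down ("dial tcp 10.217.0.140:5353: connect: connection refused") is the TCP-socket variant of the same check: a bare dial, no request body at all.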
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.069857 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-logs\") pod \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.069955 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-etc-machine-id\") pod \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.070040 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-config-data\") pod \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.070214 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-combined-ca-bundle\") pod \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.070282 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-config-data-custom\") pod \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.070322 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg756\" (UniqueName: \"kubernetes.io/projected/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-kube-api-access-sg756\") pod \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.070347 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-scripts\") pod \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\" (UID: \"0c2eec64-4eaa-4412-9ff7-dad5918c12c8\") " Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.077902 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0c2eec64-4eaa-4412-9ff7-dad5918c12c8" (UID: "0c2eec64-4eaa-4412-9ff7-dad5918c12c8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.078866 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-scripts" (OuterVolumeSpecName: "scripts") pod "0c2eec64-4eaa-4412-9ff7-dad5918c12c8" (UID: "0c2eec64-4eaa-4412-9ff7-dad5918c12c8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.079411 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0c2eec64-4eaa-4412-9ff7-dad5918c12c8" (UID: "0c2eec64-4eaa-4412-9ff7-dad5918c12c8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.079492 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-kube-api-access-sg756" (OuterVolumeSpecName: "kube-api-access-sg756") pod "0c2eec64-4eaa-4412-9ff7-dad5918c12c8" (UID: "0c2eec64-4eaa-4412-9ff7-dad5918c12c8"). InnerVolumeSpecName "kube-api-access-sg756". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.084920 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-logs" (OuterVolumeSpecName: "logs") pod "0c2eec64-4eaa-4412-9ff7-dad5918c12c8" (UID: "0c2eec64-4eaa-4412-9ff7-dad5918c12c8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.113590 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c2eec64-4eaa-4412-9ff7-dad5918c12c8" (UID: "0c2eec64-4eaa-4412-9ff7-dad5918c12c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.147892 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-config-data" (OuterVolumeSpecName: "config-data") pod "0c2eec64-4eaa-4412-9ff7-dad5918c12c8" (UID: "0c2eec64-4eaa-4412-9ff7-dad5918c12c8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.172980 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.173011 5008 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.173020 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.173032 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg756\" (UniqueName: \"kubernetes.io/projected/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-kube-api-access-sg756\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.173043 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.173051 5008 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.173058 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c2eec64-4eaa-4412-9ff7-dad5918c12c8-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.649919 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.692985 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.769054 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0c2eec64-4eaa-4412-9ff7-dad5918c12c8","Type":"ContainerDied","Data":"6656a69f4cd648a8aa3695a0ddc7bc96445ac83b10c1e0933a0183bb3570fe1e"} Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.769125 5008 scope.go:117] "RemoveContainer" containerID="c9201e193ff2e5a2b26c3ff616dcb3f4435c4982dabca37e22678acddcd52a0c" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.769205 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.769348 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d01ff2cd-2707-4765-a399-a68312196c22" containerName="cinder-scheduler" containerID="cri-o://b75f2a4361779c7b8425fd94ecbf05c19e481194aa4b56d42b2abd6ec2919902" gracePeriod=30 Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.769722 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d01ff2cd-2707-4765-a399-a68312196c22" containerName="probe" containerID="cri-o://69665425f19a49b5cdcfb4255b47fbfaaa95a031ae37ae6f7818c9b5e08c3fc8" gracePeriod=30 Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.806508 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.819648 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.837032 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 29 15:50:02 crc kubenswrapper[5008]: E0129 15:50:02.837379 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c2eec64-4eaa-4412-9ff7-dad5918c12c8" containerName="cinder-api" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.837397 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c2eec64-4eaa-4412-9ff7-dad5918c12c8" containerName="cinder-api" Jan 29 15:50:02 crc kubenswrapper[5008]: E0129 15:50:02.837410 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c2eec64-4eaa-4412-9ff7-dad5918c12c8" containerName="cinder-api-log" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.837417 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c2eec64-4eaa-4412-9ff7-dad5918c12c8" containerName="cinder-api-log" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.838371 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c2eec64-4eaa-4412-9ff7-dad5918c12c8" containerName="cinder-api-log" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.838394 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c2eec64-4eaa-4412-9ff7-dad5918c12c8" containerName="cinder-api" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.841406 5008 util.go:30] "No sandbox for pod can be found. 
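[Annotation] The "Killing container with a grace period" entries above follow the standard container shutdown contract: SIGTERM first, SIGKILL only if the process is still alive when the grace period expires (30s here for cinder-scheduler-0; the dnsmasq-dns pod further down gets 10s). A self-contained sketch of that pattern against a local child process (Unix-only, and a simplification of what CRI-O actually does):

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// killWithGracePeriod sends SIGTERM, waits up to gracePeriod for the
// process to exit, then falls back to SIGKILL.
func killWithGracePeriod(cmd *exec.Cmd, gracePeriod time.Duration) {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	_ = cmd.Process.Signal(syscall.SIGTERM)
	select {
	case <-done:
		fmt.Println("exited within the grace period")
	case <-time.After(gracePeriod):
		_ = cmd.Process.Kill() // SIGKILL: SIGTERM was ignored too long
		<-done
		fmt.Println("force-killed after the grace period")
	}
}

func main() {
	cmd := exec.Command("sleep", "300") // stand-in for a container process
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	killWithGracePeriod(cmd, 2*time.Second)
}

Which branch fires is visible in the log as the eventual ContainerDied exit code: 143 (SIGTERM honored) versus 137 (SIGKILL).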
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.844936 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.844976 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.845071 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.855042 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.888563 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-config-data\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.888638 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f60d298-c33b-44b3-a99c-a0e75a321a80-logs\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.888663 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.891878 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.891958 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f60d298-c33b-44b3-a99c-a0e75a321a80-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.892004 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n725c\" (UniqueName: \"kubernetes.io/projected/2f60d298-c33b-44b3-a99c-a0e75a321a80-kube-api-access-n725c\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.892050 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-config-data-custom\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.892219 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.892327 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-scripts\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.994207 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.994271 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-scripts\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.994315 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-config-data\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.994356 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f60d298-c33b-44b3-a99c-a0e75a321a80-logs\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.994377 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.994410 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.994430 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f60d298-c33b-44b3-a99c-a0e75a321a80-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.994452 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n725c\" (UniqueName: \"kubernetes.io/projected/2f60d298-c33b-44b3-a99c-a0e75a321a80-kube-api-access-n725c\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.994474 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-config-data-custom\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.994897 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f60d298-c33b-44b3-a99c-a0e75a321a80-logs\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:02 crc kubenswrapper[5008]: I0129 15:50:02.994958 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f60d298-c33b-44b3-a99c-a0e75a321a80-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:03 crc kubenswrapper[5008]: I0129 15:50:03.003633 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:03 crc kubenswrapper[5008]: I0129 15:50:03.003742 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:03 crc kubenswrapper[5008]: I0129 15:50:03.004555 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-scripts\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:03 crc kubenswrapper[5008]: I0129 15:50:03.006537 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-config-data-custom\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:03 crc kubenswrapper[5008]: I0129 15:50:03.007590 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-config-data\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:03 crc kubenswrapper[5008]: I0129 15:50:03.014060 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f60d298-c33b-44b3-a99c-a0e75a321a80-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:03 crc kubenswrapper[5008]: I0129 15:50:03.027066 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n725c\" (UniqueName: \"kubernetes.io/projected/2f60d298-c33b-44b3-a99c-a0e75a321a80-kube-api-access-n725c\") pod \"cinder-api-0\" (UID: \"2f60d298-c33b-44b3-a99c-a0e75a321a80\") " pod="openstack/cinder-api-0" Jan 29 15:50:03 crc kubenswrapper[5008]: I0129 15:50:03.164925 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 15:50:03 crc kubenswrapper[5008]: I0129 15:50:03.352505 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c2eec64-4eaa-4412-9ff7-dad5918c12c8" path="/var/lib/kubelet/pods/0c2eec64-4eaa-4412-9ff7-dad5918c12c8/volumes" Jan 29 15:50:04 crc kubenswrapper[5008]: I0129 15:50:04.098031 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:50:04 crc kubenswrapper[5008]: I0129 15:50:04.188087 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-f77w7"] Jan 29 15:50:04 crc kubenswrapper[5008]: I0129 15:50:04.188359 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cf78879c9-f77w7" podUID="771d4fdc-7731-4bfc-a65a-7c3b8624eb32" containerName="dnsmasq-dns" containerID="cri-o://7c2adc3a463437940f2209966bd51450818f3254391e12503b2d25eac2fb47ae" gracePeriod=10 Jan 29 15:50:04 crc kubenswrapper[5008]: I0129 15:50:04.786192 5008 generic.go:334] "Generic (PLEG): container finished" podID="771d4fdc-7731-4bfc-a65a-7c3b8624eb32" containerID="7c2adc3a463437940f2209966bd51450818f3254391e12503b2d25eac2fb47ae" exitCode=0 Jan 29 15:50:04 crc kubenswrapper[5008]: I0129 15:50:04.786352 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-f77w7" event={"ID":"771d4fdc-7731-4bfc-a65a-7c3b8624eb32","Type":"ContainerDied","Data":"7c2adc3a463437940f2209966bd51450818f3254391e12503b2d25eac2fb47ae"} Jan 29 15:50:04 crc kubenswrapper[5008]: I0129 15:50:04.789281 5008 generic.go:334] "Generic (PLEG): container finished" podID="d01ff2cd-2707-4765-a399-a68312196c22" containerID="69665425f19a49b5cdcfb4255b47fbfaaa95a031ae37ae6f7818c9b5e08c3fc8" exitCode=0 Jan 29 15:50:04 crc kubenswrapper[5008]: I0129 15:50:04.789317 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d01ff2cd-2707-4765-a399-a68312196c22","Type":"ContainerDied","Data":"69665425f19a49b5cdcfb4255b47fbfaaa95a031ae37ae6f7818c9b5e08c3fc8"} Jan 29 15:50:05 crc kubenswrapper[5008]: I0129 15:50:05.169427 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cf78879c9-f77w7" podUID="771d4fdc-7731-4bfc-a65a-7c3b8624eb32" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: connect: connection refused" Jan 29 15:50:05 crc kubenswrapper[5008]: I0129 15:50:05.313343 5008 scope.go:117] "RemoveContainer" containerID="6fb9ad78b8cfc33e60172b80d4b4df57814803c7224a9357a1c3e296f8b0d427" Jan 29 15:50:05 crc kubenswrapper[5008]: E0129 15:50:05.565014 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 29 15:50:05 crc kubenswrapper[5008]: E0129 15:50:05.565503 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ngjqg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(8457b44a-814e-403f-a2c9-71907f5cb2d2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:50:05 crc kubenswrapper[5008]: E0129 15:50:05.567364 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="8457b44a-814e-403f-a2c9-71907f5cb2d2" Jan 29 15:50:05 crc kubenswrapper[5008]: I0129 15:50:05.802931 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8457b44a-814e-403f-a2c9-71907f5cb2d2" containerName="ceilometer-notification-agent" 
containerID="cri-o://c73a64288c02c3985aea7548e9fdb8867b747089e767ede40e25dba325344234" gracePeriod=30 Jan 29 15:50:05 crc kubenswrapper[5008]: I0129 15:50:05.803388 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8457b44a-814e-403f-a2c9-71907f5cb2d2" containerName="sg-core" containerID="cri-o://73570da7fb4cd60403415b8ef7560376566de89eb802bb7bc549402efb543a24" gracePeriod=30 Jan 29 15:50:05 crc kubenswrapper[5008]: I0129 15:50:05.878148 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:50:05 crc kubenswrapper[5008]: I0129 15:50:05.907758 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 15:50:05 crc kubenswrapper[5008]: I0129 15:50:05.921995 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-98cff5df-8qpcl"] Jan 29 15:50:05 crc kubenswrapper[5008]: I0129 15:50:05.944061 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hqb9\" (UniqueName: \"kubernetes.io/projected/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-kube-api-access-2hqb9\") pod \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " Jan 29 15:50:05 crc kubenswrapper[5008]: I0129 15:50:05.944209 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-dns-svc\") pod \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " Jan 29 15:50:05 crc kubenswrapper[5008]: I0129 15:50:05.944227 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-ovsdbserver-nb\") pod \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " Jan 29 15:50:05 crc kubenswrapper[5008]: I0129 15:50:05.944254 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-ovsdbserver-sb\") pod \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " Jan 29 15:50:05 crc kubenswrapper[5008]: I0129 15:50:05.944290 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-config\") pod \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " Jan 29 15:50:05 crc kubenswrapper[5008]: I0129 15:50:05.944333 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-dns-swift-storage-0\") pod \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\" (UID: \"771d4fdc-7731-4bfc-a65a-7c3b8624eb32\") " Jan 29 15:50:05 crc kubenswrapper[5008]: I0129 15:50:05.950657 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-kube-api-access-2hqb9" (OuterVolumeSpecName: "kube-api-access-2hqb9") pod "771d4fdc-7731-4bfc-a65a-7c3b8624eb32" (UID: "771d4fdc-7731-4bfc-a65a-7c3b8624eb32"). InnerVolumeSpecName "kube-api-access-2hqb9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:05 crc kubenswrapper[5008]: I0129 15:50:05.990158 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "771d4fdc-7731-4bfc-a65a-7c3b8624eb32" (UID: "771d4fdc-7731-4bfc-a65a-7c3b8624eb32"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.000933 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "771d4fdc-7731-4bfc-a65a-7c3b8624eb32" (UID: "771d4fdc-7731-4bfc-a65a-7c3b8624eb32"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.003284 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "771d4fdc-7731-4bfc-a65a-7c3b8624eb32" (UID: "771d4fdc-7731-4bfc-a65a-7c3b8624eb32"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.004278 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-config" (OuterVolumeSpecName: "config") pod "771d4fdc-7731-4bfc-a65a-7c3b8624eb32" (UID: "771d4fdc-7731-4bfc-a65a-7c3b8624eb32"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.006442 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "771d4fdc-7731-4bfc-a65a-7c3b8624eb32" (UID: "771d4fdc-7731-4bfc-a65a-7c3b8624eb32"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.046794 5008 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.046835 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hqb9\" (UniqueName: \"kubernetes.io/projected/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-kube-api-access-2hqb9\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.046851 5008 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.046865 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.046877 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.046888 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/771d4fdc-7731-4bfc-a65a-7c3b8624eb32-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.812539 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-98cff5df-8qpcl" event={"ID":"6bf14a27-dc0a-430e-affa-a6a28e944947","Type":"ContainerStarted","Data":"22f0c736cbcd5ecc4ec0e4188555dfe5fe097fd13441242813876c102b643b46"} Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.813114 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-98cff5df-8qpcl" event={"ID":"6bf14a27-dc0a-430e-affa-a6a28e944947","Type":"ContainerStarted","Data":"be1205ad8348b2d00885a1eb712b0ab1840bafcf0d42108c655323a2168a5d8a"} Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.813128 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-98cff5df-8qpcl" event={"ID":"6bf14a27-dc0a-430e-affa-a6a28e944947","Type":"ContainerStarted","Data":"09bec9cc7b5bfce2561ed03382e1c39ba67dc010d7a69e00025756ae6c9863ae"} Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.813141 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.817654 5008 generic.go:334] "Generic (PLEG): container finished" podID="8457b44a-814e-403f-a2c9-71907f5cb2d2" containerID="73570da7fb4cd60403415b8ef7560376566de89eb802bb7bc549402efb543a24" exitCode=2 Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.817699 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8457b44a-814e-403f-a2c9-71907f5cb2d2","Type":"ContainerDied","Data":"73570da7fb4cd60403415b8ef7560376566de89eb802bb7bc549402efb543a24"} Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.818916 5008 generic.go:334] "Generic (PLEG): container finished" podID="4ec0e696-652d-463e-b97e-dad0065a543b" containerID="0d834ba968e6d63e097a6aef362d3f06eb5d6b998580ed84a27255f328fc86b5" 
exitCode=0 Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.818963 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rcl2z" event={"ID":"4ec0e696-652d-463e-b97e-dad0065a543b","Type":"ContainerDied","Data":"0d834ba968e6d63e097a6aef362d3f06eb5d6b998580ed84a27255f328fc86b5"} Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.820242 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-f77w7" event={"ID":"771d4fdc-7731-4bfc-a65a-7c3b8624eb32","Type":"ContainerDied","Data":"0855c1b3124d74f066ce8585049d7c108a1ae142bfe48dd2fe48b76c9a87b4b0"} Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.820274 5008 scope.go:117] "RemoveContainer" containerID="7c2adc3a463437940f2209966bd51450818f3254391e12503b2d25eac2fb47ae" Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.820383 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-f77w7" Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.832953 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-98cff5df-8qpcl" podStartSLOduration=8.832932782 podStartE2EDuration="8.832932782s" podCreationTimestamp="2026-01-29 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:50:06.831080857 +0000 UTC m=+1350.503935104" watchObservedRunningTime="2026-01-29 15:50:06.832932782 +0000 UTC m=+1350.505787019" Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.846364 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2f60d298-c33b-44b3-a99c-a0e75a321a80","Type":"ContainerStarted","Data":"cd25fc19c8d48481455c2dc0d01e51bd350a2779964eeedcdc3663db00a3354d"} Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.846419 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2f60d298-c33b-44b3-a99c-a0e75a321a80","Type":"ContainerStarted","Data":"24ef3c55c65c899e90c3a3025fc0ec92178c9af98f4626b89ff82020024e8b95"} Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.907048 5008 scope.go:117] "RemoveContainer" containerID="3fec96d0d9b6bf3046f7029a3dc91f246cf551ca6e017f8896e18866aed96699" Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.909162 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-f77w7"] Jan 29 15:50:06 crc kubenswrapper[5008]: I0129 15:50:06.915766 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-f77w7"] Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.335854 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="771d4fdc-7731-4bfc-a65a-7c3b8624eb32" path="/var/lib/kubelet/pods/771d4fdc-7731-4bfc-a65a-7c3b8624eb32/volumes" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.430134 5008 util.go:48] "No ready sandbox for pod can be found. 
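[Annotation] The pod_startup_latency_tracker entry above for neutron-98cff5df-8qpcl is plain timestamp arithmetic: with both pulling timestamps at the zero time (0001-01-01, i.e. no image pull was needed), podStartSLOduration works out to the watch-observed running time minus podCreationTimestamp, 15:50:06.832932782 − 15:49:58 = 8.832932782s. Reproducing the arithmetic:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2026-01-29 15:49:58 +0000 UTC")
	running, _ := time.Parse(layout, "2026-01-29 15:50:06.832932782 +0000 UTC")

	// With firstStartedPulling/lastFinishedPulling at the zero time, no
	// pull duration is subtracted: the SLO duration is running - created.
	fmt.Println("podStartSLOduration =", running.Sub(created)) // 8.832932782s
}

The earlier cinder-scheduler-0 entry shows the other case: there podStartSLOduration (4.527820799s) is shorter than podStartE2EDuration (5.770222777s) by exactly the image-pull window, lastFinishedPulling − firstStartedPulling.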
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.490016 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-scripts\") pod \"8457b44a-814e-403f-a2c9-71907f5cb2d2\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.490136 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-sg-core-conf-yaml\") pod \"8457b44a-814e-403f-a2c9-71907f5cb2d2\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.490220 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-combined-ca-bundle\") pod \"8457b44a-814e-403f-a2c9-71907f5cb2d2\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.490263 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngjqg\" (UniqueName: \"kubernetes.io/projected/8457b44a-814e-403f-a2c9-71907f5cb2d2-kube-api-access-ngjqg\") pod \"8457b44a-814e-403f-a2c9-71907f5cb2d2\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.490324 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8457b44a-814e-403f-a2c9-71907f5cb2d2-run-httpd\") pod \"8457b44a-814e-403f-a2c9-71907f5cb2d2\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.490419 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8457b44a-814e-403f-a2c9-71907f5cb2d2-log-httpd\") pod \"8457b44a-814e-403f-a2c9-71907f5cb2d2\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.490455 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-config-data\") pod \"8457b44a-814e-403f-a2c9-71907f5cb2d2\" (UID: \"8457b44a-814e-403f-a2c9-71907f5cb2d2\") " Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.490744 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8457b44a-814e-403f-a2c9-71907f5cb2d2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8457b44a-814e-403f-a2c9-71907f5cb2d2" (UID: "8457b44a-814e-403f-a2c9-71907f5cb2d2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.490773 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8457b44a-814e-403f-a2c9-71907f5cb2d2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8457b44a-814e-403f-a2c9-71907f5cb2d2" (UID: "8457b44a-814e-403f-a2c9-71907f5cb2d2"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.492162 5008 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8457b44a-814e-403f-a2c9-71907f5cb2d2-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.492336 5008 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8457b44a-814e-403f-a2c9-71907f5cb2d2-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.497410 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-scripts" (OuterVolumeSpecName: "scripts") pod "8457b44a-814e-403f-a2c9-71907f5cb2d2" (UID: "8457b44a-814e-403f-a2c9-71907f5cb2d2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.499543 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8457b44a-814e-403f-a2c9-71907f5cb2d2-kube-api-access-ngjqg" (OuterVolumeSpecName: "kube-api-access-ngjqg") pod "8457b44a-814e-403f-a2c9-71907f5cb2d2" (UID: "8457b44a-814e-403f-a2c9-71907f5cb2d2"). InnerVolumeSpecName "kube-api-access-ngjqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.519872 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-config-data" (OuterVolumeSpecName: "config-data") pod "8457b44a-814e-403f-a2c9-71907f5cb2d2" (UID: "8457b44a-814e-403f-a2c9-71907f5cb2d2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.522994 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8457b44a-814e-403f-a2c9-71907f5cb2d2" (UID: "8457b44a-814e-403f-a2c9-71907f5cb2d2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.528178 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8457b44a-814e-403f-a2c9-71907f5cb2d2" (UID: "8457b44a-814e-403f-a2c9-71907f5cb2d2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.593493 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.593534 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.593543 5008 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.593553 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8457b44a-814e-403f-a2c9-71907f5cb2d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.593565 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngjqg\" (UniqueName: \"kubernetes.io/projected/8457b44a-814e-403f-a2c9-71907f5cb2d2-kube-api-access-ngjqg\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.875734 5008 generic.go:334] "Generic (PLEG): container finished" podID="8457b44a-814e-403f-a2c9-71907f5cb2d2" containerID="c73a64288c02c3985aea7548e9fdb8867b747089e767ede40e25dba325344234" exitCode=0 Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.875830 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8457b44a-814e-403f-a2c9-71907f5cb2d2","Type":"ContainerDied","Data":"c73a64288c02c3985aea7548e9fdb8867b747089e767ede40e25dba325344234"} Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.877084 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8457b44a-814e-403f-a2c9-71907f5cb2d2","Type":"ContainerDied","Data":"c97bf01c6b949d39e9bc8fa902a0c1cf304eedee9dbe4194b2055c35de3ec4ce"} Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.875887 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.877149 5008 scope.go:117] "RemoveContainer" containerID="73570da7fb4cd60403415b8ef7560376566de89eb802bb7bc549402efb543a24" Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.886837 5008 generic.go:334] "Generic (PLEG): container finished" podID="d01ff2cd-2707-4765-a399-a68312196c22" containerID="b75f2a4361779c7b8425fd94ecbf05c19e481194aa4b56d42b2abd6ec2919902" exitCode=0 Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.886879 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d01ff2cd-2707-4765-a399-a68312196c22","Type":"ContainerDied","Data":"b75f2a4361779c7b8425fd94ecbf05c19e481194aa4b56d42b2abd6ec2919902"} Jan 29 15:50:07 crc kubenswrapper[5008]: I0129 15:50:07.982385 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.005361 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.015571 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:08 crc kubenswrapper[5008]: E0129 15:50:08.016103 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8457b44a-814e-403f-a2c9-71907f5cb2d2" containerName="ceilometer-notification-agent" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.016121 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="8457b44a-814e-403f-a2c9-71907f5cb2d2" containerName="ceilometer-notification-agent" Jan 29 15:50:08 crc kubenswrapper[5008]: E0129 15:50:08.016134 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8457b44a-814e-403f-a2c9-71907f5cb2d2" containerName="sg-core" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.016141 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="8457b44a-814e-403f-a2c9-71907f5cb2d2" containerName="sg-core" Jan 29 15:50:08 crc kubenswrapper[5008]: E0129 15:50:08.016163 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="771d4fdc-7731-4bfc-a65a-7c3b8624eb32" containerName="init" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.016169 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="771d4fdc-7731-4bfc-a65a-7c3b8624eb32" containerName="init" Jan 29 15:50:08 crc kubenswrapper[5008]: E0129 15:50:08.016183 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="771d4fdc-7731-4bfc-a65a-7c3b8624eb32" containerName="dnsmasq-dns" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.016190 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="771d4fdc-7731-4bfc-a65a-7c3b8624eb32" containerName="dnsmasq-dns" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.016341 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="8457b44a-814e-403f-a2c9-71907f5cb2d2" containerName="sg-core" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.016360 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="771d4fdc-7731-4bfc-a65a-7c3b8624eb32" containerName="dnsmasq-dns" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.016374 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="8457b44a-814e-403f-a2c9-71907f5cb2d2" containerName="ceilometer-notification-agent" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.018121 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.021455 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.022523 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.048649 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.051630 5008 scope.go:117] "RemoveContainer" containerID="c73a64288c02c3985aea7548e9fdb8867b747089e767ede40e25dba325344234" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.083217 5008 scope.go:117] "RemoveContainer" containerID="73570da7fb4cd60403415b8ef7560376566de89eb802bb7bc549402efb543a24" Jan 29 15:50:08 crc kubenswrapper[5008]: E0129 15:50:08.083640 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73570da7fb4cd60403415b8ef7560376566de89eb802bb7bc549402efb543a24\": container with ID starting with 73570da7fb4cd60403415b8ef7560376566de89eb802bb7bc549402efb543a24 not found: ID does not exist" containerID="73570da7fb4cd60403415b8ef7560376566de89eb802bb7bc549402efb543a24" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.083670 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73570da7fb4cd60403415b8ef7560376566de89eb802bb7bc549402efb543a24"} err="failed to get container status \"73570da7fb4cd60403415b8ef7560376566de89eb802bb7bc549402efb543a24\": rpc error: code = NotFound desc = could not find container \"73570da7fb4cd60403415b8ef7560376566de89eb802bb7bc549402efb543a24\": container with ID starting with 73570da7fb4cd60403415b8ef7560376566de89eb802bb7bc549402efb543a24 not found: ID does not exist" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.083690 5008 scope.go:117] "RemoveContainer" containerID="c73a64288c02c3985aea7548e9fdb8867b747089e767ede40e25dba325344234" Jan 29 15:50:08 crc kubenswrapper[5008]: E0129 15:50:08.084209 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c73a64288c02c3985aea7548e9fdb8867b747089e767ede40e25dba325344234\": container with ID starting with c73a64288c02c3985aea7548e9fdb8867b747089e767ede40e25dba325344234 not found: ID does not exist" containerID="c73a64288c02c3985aea7548e9fdb8867b747089e767ede40e25dba325344234" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.084230 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c73a64288c02c3985aea7548e9fdb8867b747089e767ede40e25dba325344234"} err="failed to get container status \"c73a64288c02c3985aea7548e9fdb8867b747089e767ede40e25dba325344234\": rpc error: code = NotFound desc = could not find container \"c73a64288c02c3985aea7548e9fdb8867b747089e767ede40e25dba325344234\": container with ID starting with c73a64288c02c3985aea7548e9fdb8867b747089e767ede40e25dba325344234 not found: ID does not exist" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.101754 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nrdl\" (UniqueName: \"kubernetes.io/projected/b98db574-9529-4d76-be4d-66b44b61a962-kube-api-access-7nrdl\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " 
pod="openstack/ceilometer-0" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.102068 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.102294 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-config-data\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.102357 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b98db574-9529-4d76-be4d-66b44b61a962-run-httpd\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.103211 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b98db574-9529-4d76-be4d-66b44b61a962-log-httpd\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.103295 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.103337 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-scripts\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.205166 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nrdl\" (UniqueName: \"kubernetes.io/projected/b98db574-9529-4d76-be4d-66b44b61a962-kube-api-access-7nrdl\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.205484 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.205539 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-config-data\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.205591 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b98db574-9529-4d76-be4d-66b44b61a962-run-httpd\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.205665 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b98db574-9529-4d76-be4d-66b44b61a962-log-httpd\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.205752 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.205841 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-scripts\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.207350 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b98db574-9529-4d76-be4d-66b44b61a962-log-httpd\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.207729 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b98db574-9529-4d76-be4d-66b44b61a962-run-httpd\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.211433 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-config-data\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.211583 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.212412 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-scripts\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.213632 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.223667 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nrdl\" (UniqueName: \"kubernetes.io/projected/b98db574-9529-4d76-be4d-66b44b61a962-kube-api-access-7nrdl\") pod 
\"ceilometer-0\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " pod="openstack/ceilometer-0" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.328449 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.329027 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.345719 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-779d6696cc-ltp9g" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.348179 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.359621 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rcl2z" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.424997 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jftb\" (UniqueName: \"kubernetes.io/projected/4ec0e696-652d-463e-b97e-dad0065a543b-kube-api-access-5jftb\") pod \"4ec0e696-652d-463e-b97e-dad0065a543b\" (UID: \"4ec0e696-652d-463e-b97e-dad0065a543b\") " Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.425665 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4ec0e696-652d-463e-b97e-dad0065a543b-db-sync-config-data\") pod \"4ec0e696-652d-463e-b97e-dad0065a543b\" (UID: \"4ec0e696-652d-463e-b97e-dad0065a543b\") " Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.425818 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ec0e696-652d-463e-b97e-dad0065a543b-combined-ca-bundle\") pod \"4ec0e696-652d-463e-b97e-dad0065a543b\" (UID: \"4ec0e696-652d-463e-b97e-dad0065a543b\") " Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.434738 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ec0e696-652d-463e-b97e-dad0065a543b-kube-api-access-5jftb" (OuterVolumeSpecName: "kube-api-access-5jftb") pod "4ec0e696-652d-463e-b97e-dad0065a543b" (UID: "4ec0e696-652d-463e-b97e-dad0065a543b"). InnerVolumeSpecName "kube-api-access-5jftb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.445160 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ec0e696-652d-463e-b97e-dad0065a543b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4ec0e696-652d-463e-b97e-dad0065a543b" (UID: "4ec0e696-652d-463e-b97e-dad0065a543b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.494342 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ec0e696-652d-463e-b97e-dad0065a543b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ec0e696-652d-463e-b97e-dad0065a543b" (UID: "4ec0e696-652d-463e-b97e-dad0065a543b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.528861 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jftb\" (UniqueName: \"kubernetes.io/projected/4ec0e696-652d-463e-b97e-dad0065a543b-kube-api-access-5jftb\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.528894 5008 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4ec0e696-652d-463e-b97e-dad0065a543b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.528903 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ec0e696-652d-463e-b97e-dad0065a543b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.617668 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-55d9fbf66-r5kj8"] Jan 29 15:50:08 crc kubenswrapper[5008]: E0129 15:50:08.618265 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ec0e696-652d-463e-b97e-dad0065a543b" containerName="barbican-db-sync" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.618286 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ec0e696-652d-463e-b97e-dad0065a543b" containerName="barbican-db-sync" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.618719 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ec0e696-652d-463e-b97e-dad0065a543b" containerName="barbican-db-sync" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.620918 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.653495 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-55d9fbf66-r5kj8"] Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.731952 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85024049-9e4b-4814-a617-cd17614f2a80-logs\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.732002 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85024049-9e4b-4814-a617-cd17614f2a80-scripts\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.732037 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85024049-9e4b-4814-a617-cd17614f2a80-public-tls-certs\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.732212 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85024049-9e4b-4814-a617-cd17614f2a80-combined-ca-bundle\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " 
pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.732283 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkpzj\" (UniqueName: \"kubernetes.io/projected/85024049-9e4b-4814-a617-cd17614f2a80-kube-api-access-pkpzj\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.732313 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85024049-9e4b-4814-a617-cd17614f2a80-config-data\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.732368 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/85024049-9e4b-4814-a617-cd17614f2a80-internal-tls-certs\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.781406 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.833419 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-combined-ca-bundle\") pod \"d01ff2cd-2707-4765-a399-a68312196c22\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.833543 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-scripts\") pod \"d01ff2cd-2707-4765-a399-a68312196c22\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.833632 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hzd8\" (UniqueName: \"kubernetes.io/projected/d01ff2cd-2707-4765-a399-a68312196c22-kube-api-access-4hzd8\") pod \"d01ff2cd-2707-4765-a399-a68312196c22\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.833695 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d01ff2cd-2707-4765-a399-a68312196c22-etc-machine-id\") pod \"d01ff2cd-2707-4765-a399-a68312196c22\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.833929 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-config-data-custom\") pod \"d01ff2cd-2707-4765-a399-a68312196c22\" (UID: \"d01ff2cd-2707-4765-a399-a68312196c22\") " Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.834064 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-config-data\") pod \"d01ff2cd-2707-4765-a399-a68312196c22\" (UID: 
\"d01ff2cd-2707-4765-a399-a68312196c22\") " Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.834431 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85024049-9e4b-4814-a617-cd17614f2a80-combined-ca-bundle\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.834519 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkpzj\" (UniqueName: \"kubernetes.io/projected/85024049-9e4b-4814-a617-cd17614f2a80-kube-api-access-pkpzj\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.834581 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85024049-9e4b-4814-a617-cd17614f2a80-config-data\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.834672 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/85024049-9e4b-4814-a617-cd17614f2a80-internal-tls-certs\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.834744 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85024049-9e4b-4814-a617-cd17614f2a80-logs\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.834768 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85024049-9e4b-4814-a617-cd17614f2a80-scripts\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.834853 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85024049-9e4b-4814-a617-cd17614f2a80-public-tls-certs\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.837454 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85024049-9e4b-4814-a617-cd17614f2a80-logs\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.840315 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d01ff2cd-2707-4765-a399-a68312196c22-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d01ff2cd-2707-4765-a399-a68312196c22" (UID: "d01ff2cd-2707-4765-a399-a68312196c22"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.845349 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85024049-9e4b-4814-a617-cd17614f2a80-config-data\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.847346 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85024049-9e4b-4814-a617-cd17614f2a80-scripts\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.850962 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-scripts" (OuterVolumeSpecName: "scripts") pod "d01ff2cd-2707-4765-a399-a68312196c22" (UID: "d01ff2cd-2707-4765-a399-a68312196c22"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.855584 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d01ff2cd-2707-4765-a399-a68312196c22" (UID: "d01ff2cd-2707-4765-a399-a68312196c22"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.855682 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d01ff2cd-2707-4765-a399-a68312196c22-kube-api-access-4hzd8" (OuterVolumeSpecName: "kube-api-access-4hzd8") pod "d01ff2cd-2707-4765-a399-a68312196c22" (UID: "d01ff2cd-2707-4765-a399-a68312196c22"). InnerVolumeSpecName "kube-api-access-4hzd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.856040 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85024049-9e4b-4814-a617-cd17614f2a80-combined-ca-bundle\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.860473 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkpzj\" (UniqueName: \"kubernetes.io/projected/85024049-9e4b-4814-a617-cd17614f2a80-kube-api-access-pkpzj\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.861094 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/85024049-9e4b-4814-a617-cd17614f2a80-internal-tls-certs\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.877158 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85024049-9e4b-4814-a617-cd17614f2a80-public-tls-certs\") pod \"placement-55d9fbf66-r5kj8\" (UID: \"85024049-9e4b-4814-a617-cd17614f2a80\") " pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.900509 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rcl2z" event={"ID":"4ec0e696-652d-463e-b97e-dad0065a543b","Type":"ContainerDied","Data":"748398d1ff4ce764be647594fea290f65e925f9a2636d8aeb85a205a07c6aff2"} Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.900528 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rcl2z" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.900547 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="748398d1ff4ce764be647594fea290f65e925f9a2636d8aeb85a205a07c6aff2" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.903224 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.903206 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d01ff2cd-2707-4765-a399-a68312196c22","Type":"ContainerDied","Data":"57c9901e381187fc7eb0fcdcbe0d130f0d9a3aa88a3658cef67338340e39620e"} Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.903359 5008 scope.go:117] "RemoveContainer" containerID="69665425f19a49b5cdcfb4255b47fbfaaa95a031ae37ae6f7818c9b5e08c3fc8" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.908696 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2f60d298-c33b-44b3-a99c-a0e75a321a80","Type":"ContainerStarted","Data":"70a42b9a83558cc59b8000dd44397e820d03c275dab9b9708e536893765263c3"} Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.909505 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.931896 5008 scope.go:117] "RemoveContainer" containerID="b75f2a4361779c7b8425fd94ecbf05c19e481194aa4b56d42b2abd6ec2919902" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.938996 5008 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.939021 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.939030 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hzd8\" (UniqueName: \"kubernetes.io/projected/d01ff2cd-2707-4765-a399-a68312196c22-kube-api-access-4hzd8\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.939038 5008 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d01ff2cd-2707-4765-a399-a68312196c22-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.954618 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d01ff2cd-2707-4765-a399-a68312196c22" (UID: "d01ff2cd-2707-4765-a399-a68312196c22"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:08 crc kubenswrapper[5008]: I0129 15:50:08.962371 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-55d9fbf66-r5kj8" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.008021 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-config-data" (OuterVolumeSpecName: "config-data") pod "d01ff2cd-2707-4765-a399-a68312196c22" (UID: "d01ff2cd-2707-4765-a399-a68312196c22"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.044343 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.044369 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d01ff2cd-2707-4765-a399-a68312196c22-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.063992 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.063970024 podStartE2EDuration="7.063970024s" podCreationTimestamp="2026-01-29 15:50:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:50:08.932137156 +0000 UTC m=+1352.604991403" watchObservedRunningTime="2026-01-29 15:50:09.063970024 +0000 UTC m=+1352.736824261" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.121322 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.125480 5008 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.148219 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7f49b8c48b-x77zl" podUID="8c3bbcd6-6512-4439-b70d-f46dd6382cfe" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.155498 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-d5688bfcd-94rkm"] Jan 29 15:50:09 crc kubenswrapper[5008]: E0129 15:50:09.155906 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d01ff2cd-2707-4765-a399-a68312196c22" containerName="probe" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.155923 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d01ff2cd-2707-4765-a399-a68312196c22" containerName="probe" Jan 29 15:50:09 crc kubenswrapper[5008]: E0129 15:50:09.155952 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d01ff2cd-2707-4765-a399-a68312196c22" containerName="cinder-scheduler" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.155958 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d01ff2cd-2707-4765-a399-a68312196c22" containerName="cinder-scheduler" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.156102 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="d01ff2cd-2707-4765-a399-a68312196c22" containerName="cinder-scheduler" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.156141 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="d01ff2cd-2707-4765-a399-a68312196c22" containerName="probe" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.157098 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.159814 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-wg4h5" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.160108 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.160255 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.237936 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-d5688bfcd-94rkm"] Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.253727 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24c4cc25-9e50-4601-bac2-552e1aded799-config-data\") pod \"barbican-keystone-listener-d5688bfcd-94rkm\" (UID: \"24c4cc25-9e50-4601-bac2-552e1aded799\") " pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.253824 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7c65\" (UniqueName: \"kubernetes.io/projected/24c4cc25-9e50-4601-bac2-552e1aded799-kube-api-access-z7c65\") pod \"barbican-keystone-listener-d5688bfcd-94rkm\" (UID: \"24c4cc25-9e50-4601-bac2-552e1aded799\") " pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.253857 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24c4cc25-9e50-4601-bac2-552e1aded799-logs\") pod \"barbican-keystone-listener-d5688bfcd-94rkm\" (UID: \"24c4cc25-9e50-4601-bac2-552e1aded799\") " pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.253905 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24c4cc25-9e50-4601-bac2-552e1aded799-config-data-custom\") pod \"barbican-keystone-listener-d5688bfcd-94rkm\" (UID: \"24c4cc25-9e50-4601-bac2-552e1aded799\") " pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.253958 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24c4cc25-9e50-4601-bac2-552e1aded799-combined-ca-bundle\") pod \"barbican-keystone-listener-d5688bfcd-94rkm\" (UID: \"24c4cc25-9e50-4601-bac2-552e1aded799\") " pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.255420 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5c46c758ff-5p4jl"] Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.270554 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.274320 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.276840 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5c46c758ff-5p4jl"] Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.353654 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8457b44a-814e-403f-a2c9-71907f5cb2d2" path="/var/lib/kubelet/pods/8457b44a-814e-403f-a2c9-71907f5cb2d2/volumes" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.354234 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-h99wm"] Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.355745 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24c4cc25-9e50-4601-bac2-552e1aded799-combined-ca-bundle\") pod \"barbican-keystone-listener-d5688bfcd-94rkm\" (UID: \"24c4cc25-9e50-4601-bac2-552e1aded799\") " pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.355826 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24c4cc25-9e50-4601-bac2-552e1aded799-config-data\") pod \"barbican-keystone-listener-d5688bfcd-94rkm\" (UID: \"24c4cc25-9e50-4601-bac2-552e1aded799\") " pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.355869 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f77f54f0-02b9-4082-8a76-dc78a9b7d08c-combined-ca-bundle\") pod \"barbican-worker-5c46c758ff-5p4jl\" (UID: \"f77f54f0-02b9-4082-8a76-dc78a9b7d08c\") " pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.355891 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7c65\" (UniqueName: \"kubernetes.io/projected/24c4cc25-9e50-4601-bac2-552e1aded799-kube-api-access-z7c65\") pod \"barbican-keystone-listener-d5688bfcd-94rkm\" (UID: \"24c4cc25-9e50-4601-bac2-552e1aded799\") " pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.355913 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pnmm\" (UniqueName: \"kubernetes.io/projected/f77f54f0-02b9-4082-8a76-dc78a9b7d08c-kube-api-access-6pnmm\") pod \"barbican-worker-5c46c758ff-5p4jl\" (UID: \"f77f54f0-02b9-4082-8a76-dc78a9b7d08c\") " pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.355933 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f77f54f0-02b9-4082-8a76-dc78a9b7d08c-logs\") pod \"barbican-worker-5c46c758ff-5p4jl\" (UID: \"f77f54f0-02b9-4082-8a76-dc78a9b7d08c\") " pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.355952 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/24c4cc25-9e50-4601-bac2-552e1aded799-logs\") pod \"barbican-keystone-listener-d5688bfcd-94rkm\" (UID: \"24c4cc25-9e50-4601-bac2-552e1aded799\") " pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.355989 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f77f54f0-02b9-4082-8a76-dc78a9b7d08c-config-data-custom\") pod \"barbican-worker-5c46c758ff-5p4jl\" (UID: \"f77f54f0-02b9-4082-8a76-dc78a9b7d08c\") " pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.356015 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24c4cc25-9e50-4601-bac2-552e1aded799-config-data-custom\") pod \"barbican-keystone-listener-d5688bfcd-94rkm\" (UID: \"24c4cc25-9e50-4601-bac2-552e1aded799\") " pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.356037 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f77f54f0-02b9-4082-8a76-dc78a9b7d08c-config-data\") pod \"barbican-worker-5c46c758ff-5p4jl\" (UID: \"f77f54f0-02b9-4082-8a76-dc78a9b7d08c\") " pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.356717 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24c4cc25-9e50-4601-bac2-552e1aded799-logs\") pod \"barbican-keystone-listener-d5688bfcd-94rkm\" (UID: \"24c4cc25-9e50-4601-bac2-552e1aded799\") " pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.361958 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24c4cc25-9e50-4601-bac2-552e1aded799-combined-ca-bundle\") pod \"barbican-keystone-listener-d5688bfcd-94rkm\" (UID: \"24c4cc25-9e50-4601-bac2-552e1aded799\") " pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.363298 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24c4cc25-9e50-4601-bac2-552e1aded799-config-data-custom\") pod \"barbican-keystone-listener-d5688bfcd-94rkm\" (UID: \"24c4cc25-9e50-4601-bac2-552e1aded799\") " pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.383885 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-h99wm"] Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.383993 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.388140 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24c4cc25-9e50-4601-bac2-552e1aded799-config-data\") pod \"barbican-keystone-listener-d5688bfcd-94rkm\" (UID: \"24c4cc25-9e50-4601-bac2-552e1aded799\") " pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.397846 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-788c485464-442t2"] Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.399376 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.401367 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7c65\" (UniqueName: \"kubernetes.io/projected/24c4cc25-9e50-4601-bac2-552e1aded799-kube-api-access-z7c65\") pod \"barbican-keystone-listener-d5688bfcd-94rkm\" (UID: \"24c4cc25-9e50-4601-bac2-552e1aded799\") " pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.407980 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.417400 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-788c485464-442t2"] Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.443393 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.452140 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.458024 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4hcq\" (UniqueName: \"kubernetes.io/projected/35979baf-dba0-453c-bafd-16985d082448-kube-api-access-w4hcq\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.458076 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f77f54f0-02b9-4082-8a76-dc78a9b7d08c-combined-ca-bundle\") pod \"barbican-worker-5c46c758ff-5p4jl\" (UID: \"f77f54f0-02b9-4082-8a76-dc78a9b7d08c\") " pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.458099 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-dns-svc\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.458124 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-combined-ca-bundle\") pod \"barbican-api-788c485464-442t2\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc 
kubenswrapper[5008]: I0129 15:50:09.458146 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pnmm\" (UniqueName: \"kubernetes.io/projected/f77f54f0-02b9-4082-8a76-dc78a9b7d08c-kube-api-access-6pnmm\") pod \"barbican-worker-5c46c758ff-5p4jl\" (UID: \"f77f54f0-02b9-4082-8a76-dc78a9b7d08c\") " pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.458170 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-logs\") pod \"barbican-api-788c485464-442t2\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.458186 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f77f54f0-02b9-4082-8a76-dc78a9b7d08c-logs\") pod \"barbican-worker-5c46c758ff-5p4jl\" (UID: \"f77f54f0-02b9-4082-8a76-dc78a9b7d08c\") " pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.458201 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-config\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.458228 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-config-data\") pod \"barbican-api-788c485464-442t2\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.458289 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f77f54f0-02b9-4082-8a76-dc78a9b7d08c-config-data-custom\") pod \"barbican-worker-5c46c758ff-5p4jl\" (UID: \"f77f54f0-02b9-4082-8a76-dc78a9b7d08c\") " pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.458334 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt8j9\" (UniqueName: \"kubernetes.io/projected/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-kube-api-access-gt8j9\") pod \"barbican-api-788c485464-442t2\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.458356 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.458380 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f77f54f0-02b9-4082-8a76-dc78a9b7d08c-config-data\") pod \"barbican-worker-5c46c758ff-5p4jl\" (UID: \"f77f54f0-02b9-4082-8a76-dc78a9b7d08c\") " 
pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.458410 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.458431 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-config-data-custom\") pod \"barbican-api-788c485464-442t2\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.458490 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.460093 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f77f54f0-02b9-4082-8a76-dc78a9b7d08c-logs\") pod \"barbican-worker-5c46c758ff-5p4jl\" (UID: \"f77f54f0-02b9-4082-8a76-dc78a9b7d08c\") " pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.463761 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f77f54f0-02b9-4082-8a76-dc78a9b7d08c-config-data\") pod \"barbican-worker-5c46c758ff-5p4jl\" (UID: \"f77f54f0-02b9-4082-8a76-dc78a9b7d08c\") " pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.463827 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.465311 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f77f54f0-02b9-4082-8a76-dc78a9b7d08c-config-data-custom\") pod \"barbican-worker-5c46c758ff-5p4jl\" (UID: \"f77f54f0-02b9-4082-8a76-dc78a9b7d08c\") " pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.466517 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.467992 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.471272 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.487505 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f77f54f0-02b9-4082-8a76-dc78a9b7d08c-combined-ca-bundle\") pod \"barbican-worker-5c46c758ff-5p4jl\" (UID: \"f77f54f0-02b9-4082-8a76-dc78a9b7d08c\") " pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.489342 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pnmm\" (UniqueName: \"kubernetes.io/projected/f77f54f0-02b9-4082-8a76-dc78a9b7d08c-kube-api-access-6pnmm\") pod \"barbican-worker-5c46c758ff-5p4jl\" (UID: \"f77f54f0-02b9-4082-8a76-dc78a9b7d08c\") " pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.504741 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.559811 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt8j9\" (UniqueName: \"kubernetes.io/projected/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-kube-api-access-gt8j9\") pod \"barbican-api-788c485464-442t2\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.559904 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.559953 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c4e7961-5802-47c7-becf-75dd01d6e7d1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.560010 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c4e7961-5802-47c7-becf-75dd01d6e7d1-config-data\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.560035 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.560065 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-config-data-custom\") pod \"barbican-api-788c485464-442t2\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.560120 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b876n\" (UniqueName: \"kubernetes.io/projected/2c4e7961-5802-47c7-becf-75dd01d6e7d1-kube-api-access-b876n\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.560142 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c4e7961-5802-47c7-becf-75dd01d6e7d1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.560193 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c4e7961-5802-47c7-becf-75dd01d6e7d1-scripts\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.560231 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.560260 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4hcq\" (UniqueName: \"kubernetes.io/projected/35979baf-dba0-453c-bafd-16985d082448-kube-api-access-w4hcq\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.560294 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-dns-svc\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.560326 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-combined-ca-bundle\") pod \"barbican-api-788c485464-442t2\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.560356 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c4e7961-5802-47c7-becf-75dd01d6e7d1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.560384 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-logs\") pod \"barbican-api-788c485464-442t2\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.560408 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-config\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.560441 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-config-data\") pod \"barbican-api-788c485464-442t2\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.570515 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.570881 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.571577 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-logs\") pod \"barbican-api-788c485464-442t2\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.572058 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-dns-svc\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.572279 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-config\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.575049 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.578064 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-config-data-custom\") pod \"barbican-api-788c485464-442t2\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " 
pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.583898 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-config-data\") pod \"barbican-api-788c485464-442t2\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.589511 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-combined-ca-bundle\") pod \"barbican-api-788c485464-442t2\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.594155 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4hcq\" (UniqueName: \"kubernetes.io/projected/35979baf-dba0-453c-bafd-16985d082448-kube-api-access-w4hcq\") pod \"dnsmasq-dns-6578955fd5-h99wm\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.596563 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt8j9\" (UniqueName: \"kubernetes.io/projected/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-kube-api-access-gt8j9\") pod \"barbican-api-788c485464-442t2\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.604252 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5c46c758ff-5p4jl" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.626463 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-55d9fbf66-r5kj8"] Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.676805 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c4e7961-5802-47c7-becf-75dd01d6e7d1-config-data\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.676932 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b876n\" (UniqueName: \"kubernetes.io/projected/2c4e7961-5802-47c7-becf-75dd01d6e7d1-kube-api-access-b876n\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.676958 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c4e7961-5802-47c7-becf-75dd01d6e7d1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.676995 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c4e7961-5802-47c7-becf-75dd01d6e7d1-scripts\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.677108 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c4e7961-5802-47c7-becf-75dd01d6e7d1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.677244 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c4e7961-5802-47c7-becf-75dd01d6e7d1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.678222 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c4e7961-5802-47c7-becf-75dd01d6e7d1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.685033 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c4e7961-5802-47c7-becf-75dd01d6e7d1-scripts\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.686854 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c4e7961-5802-47c7-becf-75dd01d6e7d1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.686924 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c4e7961-5802-47c7-becf-75dd01d6e7d1-config-data\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.690307 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c4e7961-5802-47c7-becf-75dd01d6e7d1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.694269 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b876n\" (UniqueName: \"kubernetes.io/projected/2c4e7961-5802-47c7-becf-75dd01d6e7d1-kube-api-access-b876n\") pod \"cinder-scheduler-0\" (UID: \"2c4e7961-5802-47c7-becf-75dd01d6e7d1\") " pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.856071 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.870236 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.897436 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.932421 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55d9fbf66-r5kj8" event={"ID":"85024049-9e4b-4814-a617-cd17614f2a80","Type":"ContainerStarted","Data":"49577ead3c56cfa4fc8c4afa22ed35523d5fc6a9bd2fe14bedee0cd114ebd9c9"} Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.945029 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b98db574-9529-4d76-be4d-66b44b61a962","Type":"ContainerStarted","Data":"ac8bcb14c02650f4628017163e965fe6e1e75f1116276a7166d11c7831388a13"} Jan 29 15:50:10 crc kubenswrapper[5008]: I0129 15:50:10.031522 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-d5688bfcd-94rkm"] Jan 29 15:50:10 crc kubenswrapper[5008]: I0129 15:50:10.164937 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5c46c758ff-5p4jl"] Jan 29 15:50:10 crc kubenswrapper[5008]: W0129 15:50:10.169138 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf77f54f0_02b9_4082_8a76_dc78a9b7d08c.slice/crio-98d5e6627de94bf06ae24942bd5c032d8084ced63adeda9b6ac87f943ae2c8d1 WatchSource:0}: Error finding container 98d5e6627de94bf06ae24942bd5c032d8084ced63adeda9b6ac87f943ae2c8d1: Status 404 returned error can't find the container with id 98d5e6627de94bf06ae24942bd5c032d8084ced63adeda9b6ac87f943ae2c8d1 Jan 29 15:50:10 crc kubenswrapper[5008]: I0129 15:50:10.416405 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-788c485464-442t2"] Jan 29 15:50:10 crc kubenswrapper[5008]: W0129 15:50:10.418965 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod930b6c6f_40a8_476f_ad73_069c7f2ffeb8.slice/crio-32662f5b5d6c2d9d8f2c316606503b0bdf87ca2b613c9eca5e18d259a4b9490d WatchSource:0}: Error finding container 32662f5b5d6c2d9d8f2c316606503b0bdf87ca2b613c9eca5e18d259a4b9490d: Status 404 returned error can't find the container with id 32662f5b5d6c2d9d8f2c316606503b0bdf87ca2b613c9eca5e18d259a4b9490d Jan 29 15:50:10 crc kubenswrapper[5008]: I0129 15:50:10.513333 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-h99wm"] Jan 29 15:50:10 crc kubenswrapper[5008]: I0129 15:50:10.592932 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 15:50:10 crc kubenswrapper[5008]: I0129 15:50:10.978930 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c46c758ff-5p4jl" event={"ID":"f77f54f0-02b9-4082-8a76-dc78a9b7d08c","Type":"ContainerStarted","Data":"98d5e6627de94bf06ae24942bd5c032d8084ced63adeda9b6ac87f943ae2c8d1"} Jan 29 15:50:11 crc kubenswrapper[5008]: I0129 15:50:11.007881 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b98db574-9529-4d76-be4d-66b44b61a962","Type":"ContainerStarted","Data":"90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4"} Jan 29 15:50:11 crc kubenswrapper[5008]: I0129 15:50:11.007928 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b98db574-9529-4d76-be4d-66b44b61a962","Type":"ContainerStarted","Data":"29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32"} Jan 29 15:50:11 crc kubenswrapper[5008]: I0129 
Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.932421 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55d9fbf66-r5kj8" event={"ID":"85024049-9e4b-4814-a617-cd17614f2a80","Type":"ContainerStarted","Data":"49577ead3c56cfa4fc8c4afa22ed35523d5fc6a9bd2fe14bedee0cd114ebd9c9"}
Jan 29 15:50:09 crc kubenswrapper[5008]: I0129 15:50:09.945029 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b98db574-9529-4d76-be4d-66b44b61a962","Type":"ContainerStarted","Data":"ac8bcb14c02650f4628017163e965fe6e1e75f1116276a7166d11c7831388a13"}
Jan 29 15:50:10 crc kubenswrapper[5008]: I0129 15:50:10.031522 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-d5688bfcd-94rkm"]
Jan 29 15:50:10 crc kubenswrapper[5008]: I0129 15:50:10.164937 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5c46c758ff-5p4jl"]
Jan 29 15:50:10 crc kubenswrapper[5008]: W0129 15:50:10.169138 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf77f54f0_02b9_4082_8a76_dc78a9b7d08c.slice/crio-98d5e6627de94bf06ae24942bd5c032d8084ced63adeda9b6ac87f943ae2c8d1 WatchSource:0}: Error finding container 98d5e6627de94bf06ae24942bd5c032d8084ced63adeda9b6ac87f943ae2c8d1: Status 404 returned error can't find the container with id 98d5e6627de94bf06ae24942bd5c032d8084ced63adeda9b6ac87f943ae2c8d1
Jan 29 15:50:10 crc kubenswrapper[5008]: I0129 15:50:10.416405 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-788c485464-442t2"]
Jan 29 15:50:10 crc kubenswrapper[5008]: W0129 15:50:10.418965 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod930b6c6f_40a8_476f_ad73_069c7f2ffeb8.slice/crio-32662f5b5d6c2d9d8f2c316606503b0bdf87ca2b613c9eca5e18d259a4b9490d WatchSource:0}: Error finding container 32662f5b5d6c2d9d8f2c316606503b0bdf87ca2b613c9eca5e18d259a4b9490d: Status 404 returned error can't find the container with id 32662f5b5d6c2d9d8f2c316606503b0bdf87ca2b613c9eca5e18d259a4b9490d
Jan 29 15:50:10 crc kubenswrapper[5008]: I0129 15:50:10.513333 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-h99wm"]
Jan 29 15:50:10 crc kubenswrapper[5008]: I0129 15:50:10.592932 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 29 15:50:10 crc kubenswrapper[5008]: I0129 15:50:10.978930 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c46c758ff-5p4jl" event={"ID":"f77f54f0-02b9-4082-8a76-dc78a9b7d08c","Type":"ContainerStarted","Data":"98d5e6627de94bf06ae24942bd5c032d8084ced63adeda9b6ac87f943ae2c8d1"}
Jan 29 15:50:11 crc kubenswrapper[5008]: I0129 15:50:11.007881 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b98db574-9529-4d76-be4d-66b44b61a962","Type":"ContainerStarted","Data":"90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4"}
Jan 29 15:50:11 crc kubenswrapper[5008]: I0129 15:50:11.007928 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b98db574-9529-4d76-be4d-66b44b61a962","Type":"ContainerStarted","Data":"29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32"}
Jan 29 15:50:11 crc kubenswrapper[5008]: I0129 15:50:11.028924 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55d9fbf66-r5kj8" event={"ID":"85024049-9e4b-4814-a617-cd17614f2a80","Type":"ContainerStarted","Data":"83c43918b15aee419aec8c4f6c3c4f54f869f9668c31e8758b308bc721697e71"}
Jan 29 15:50:11 crc kubenswrapper[5008]: I0129 15:50:11.028967 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55d9fbf66-r5kj8" event={"ID":"85024049-9e4b-4814-a617-cd17614f2a80","Type":"ContainerStarted","Data":"c021a354391bbbe3f6a8013dd6a9be3fd3462137824bb5153db6eeeb65ccb07e"}
Jan 29 15:50:11 crc kubenswrapper[5008]: I0129 15:50:11.031085 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-55d9fbf66-r5kj8"
Jan 29 15:50:11 crc kubenswrapper[5008]: I0129 15:50:11.031123 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-55d9fbf66-r5kj8"
Jan 29 15:50:11 crc kubenswrapper[5008]: I0129 15:50:11.066374 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-h99wm" event={"ID":"35979baf-dba0-453c-bafd-16985d082448","Type":"ContainerStarted","Data":"3e9db3acbe84cb18dcd650ffdeedfffc3c78951f208824646557062d45cea8c7"}
Jan 29 15:50:11 crc kubenswrapper[5008]: I0129 15:50:11.068916 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-55d9fbf66-r5kj8" podStartSLOduration=3.06890256 podStartE2EDuration="3.06890256s" podCreationTimestamp="2026-01-29 15:50:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:50:11.063211582 +0000 UTC m=+1354.736065819" watchObservedRunningTime="2026-01-29 15:50:11.06890256 +0000 UTC m=+1354.741756797"
Jan 29 15:50:11 crc kubenswrapper[5008]: I0129 15:50:11.069015 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-788c485464-442t2" event={"ID":"930b6c6f-40a8-476f-ad73-069c7f2ffeb8","Type":"ContainerStarted","Data":"d590c476f44393281718ccb2a8a3e0af02d26c225e5b0e107a503b8af26e4e78"}
Jan 29 15:50:11 crc kubenswrapper[5008]: I0129 15:50:11.069036 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-788c485464-442t2" event={"ID":"930b6c6f-40a8-476f-ad73-069c7f2ffeb8","Type":"ContainerStarted","Data":"32662f5b5d6c2d9d8f2c316606503b0bdf87ca2b613c9eca5e18d259a4b9490d"}
Jan 29 15:50:11 crc kubenswrapper[5008]: I0129 15:50:11.069693 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" event={"ID":"24c4cc25-9e50-4601-bac2-552e1aded799","Type":"ContainerStarted","Data":"fa6af8a974cb497ec74206ad3e39eb89858800b219a9a36c75932238ec4997e5"}
Jan 29 15:50:11 crc kubenswrapper[5008]: I0129 15:50:11.076830 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c4e7961-5802-47c7-becf-75dd01d6e7d1","Type":"ContainerStarted","Data":"a630ba7450cf7ff5b68a80184b109d24ebcf6fbd8c1fb0273e1b87fb9c31dea3"}
Jan 29 15:50:11 crc kubenswrapper[5008]: I0129 15:50:11.340107 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d01ff2cd-2707-4765-a399-a68312196c22" path="/var/lib/kubelet/pods/d01ff2cd-2707-4765-a399-a68312196c22/volumes"
Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.043201 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7f9c9f8766-4lf97"]
Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.044909 5008 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.049985 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.053480 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.066997 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7f9c9f8766-4lf97"] Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.089967 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b98db574-9529-4d76-be4d-66b44b61a962","Type":"ContainerStarted","Data":"0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069"} Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.101574 5008 generic.go:334] "Generic (PLEG): container finished" podID="35979baf-dba0-453c-bafd-16985d082448" containerID="054e6e3ef42c95903f288b4bdf317b2b2caa13f9aeb23d4a04ff1cd84e828a41" exitCode=0 Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.101643 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-h99wm" event={"ID":"35979baf-dba0-453c-bafd-16985d082448","Type":"ContainerDied","Data":"054e6e3ef42c95903f288b4bdf317b2b2caa13f9aeb23d4a04ff1cd84e828a41"} Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.107074 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-788c485464-442t2" event={"ID":"930b6c6f-40a8-476f-ad73-069c7f2ffeb8","Type":"ContainerStarted","Data":"d6a474f9cb662a31c110199317649c60d49d6b8424e25729948f77b95945be36"} Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.108019 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.108048 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.134732 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c4e7961-5802-47c7-becf-75dd01d6e7d1","Type":"ContainerStarted","Data":"f20b07be3b02c44f08ebde7ad6b772dc570c81268411b26108454494d2b2451c"} Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.169016 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-logs\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.169132 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-public-tls-certs\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.169462 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-combined-ca-bundle\") pod 
\"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.169499 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-config-data\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.169535 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qgvf\" (UniqueName: \"kubernetes.io/projected/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-kube-api-access-7qgvf\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.169609 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-config-data-custom\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.169663 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-internal-tls-certs\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.188138 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-788c485464-442t2" podStartSLOduration=3.18811723 podStartE2EDuration="3.18811723s" podCreationTimestamp="2026-01-29 15:50:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:50:12.153386197 +0000 UTC m=+1355.826240444" watchObservedRunningTime="2026-01-29 15:50:12.18811723 +0000 UTC m=+1355.860971477" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.272054 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-combined-ca-bundle\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.272105 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-config-data\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.272127 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qgvf\" (UniqueName: \"kubernetes.io/projected/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-kube-api-access-7qgvf\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 
crc kubenswrapper[5008]: I0129 15:50:12.272151 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-config-data-custom\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.272176 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-internal-tls-certs\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.272248 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-logs\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.273301 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-logs\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.272705 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-public-tls-certs\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.278313 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-config-data-custom\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.279606 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-internal-tls-certs\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.282062 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-public-tls-certs\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.292040 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-config-data\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.292745 5008 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7qgvf\" (UniqueName: \"kubernetes.io/projected/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-kube-api-access-7qgvf\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.301088 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce981b8e-ff53-48ad-b44e-b150c0b1b80f-combined-ca-bundle\") pod \"barbican-api-7f9c9f8766-4lf97\" (UID: \"ce981b8e-ff53-48ad-b44e-b150c0b1b80f\") " pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:12 crc kubenswrapper[5008]: I0129 15:50:12.435901 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.149122 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c4e7961-5802-47c7-becf-75dd01d6e7d1","Type":"ContainerStarted","Data":"f2adc752d118aaa84aabfad36038ec09473521ded01c78fdfc0626baa53e4c0a"} Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.175622 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.175599915 podStartE2EDuration="4.175599915s" podCreationTimestamp="2026-01-29 15:50:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:50:13.166386002 +0000 UTC m=+1356.839240249" watchObservedRunningTime="2026-01-29 15:50:13.175599915 +0000 UTC m=+1356.848454152" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.418312 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.419794 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.419954 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.426926 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-cnc9x" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.428100 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.428647 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.517751 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-openstack-config\") pod \"openstackclient\" (UID: \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\") " pod="openstack/openstackclient" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.517823 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-openstack-config-secret\") pod \"openstackclient\" (UID: \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\") " pod="openstack/openstackclient" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.517913 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrxmg\" (UniqueName: \"kubernetes.io/projected/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-kube-api-access-qrxmg\") pod \"openstackclient\" (UID: \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\") " pod="openstack/openstackclient" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.518189 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-combined-ca-bundle\") pod \"openstackclient\" (UID: \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\") " pod="openstack/openstackclient" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.621247 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-combined-ca-bundle\") pod \"openstackclient\" (UID: \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\") " pod="openstack/openstackclient" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.622397 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-openstack-config\") pod \"openstackclient\" (UID: \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\") " pod="openstack/openstackclient" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.622443 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-openstack-config-secret\") pod \"openstackclient\" (UID: \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\") " pod="openstack/openstackclient" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.622541 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrxmg\" (UniqueName: \"kubernetes.io/projected/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-kube-api-access-qrxmg\") pod \"openstackclient\" 
(UID: \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\") " pod="openstack/openstackclient" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.624767 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-openstack-config\") pod \"openstackclient\" (UID: \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\") " pod="openstack/openstackclient" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.633654 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-combined-ca-bundle\") pod \"openstackclient\" (UID: \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\") " pod="openstack/openstackclient" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.651538 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrxmg\" (UniqueName: \"kubernetes.io/projected/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-kube-api-access-qrxmg\") pod \"openstackclient\" (UID: \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\") " pod="openstack/openstackclient" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.657367 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-openstack-config-secret\") pod \"openstackclient\" (UID: \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\") " pod="openstack/openstackclient" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.694723 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5c6fbdb57f-zvhpz"] Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.696661 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.703553 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.703755 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.703915 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.711215 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.712366 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.728867 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.784547 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5c6fbdb57f-zvhpz"] Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.804534 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.805573 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.810607 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.827121 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64c08f63-12a2-4dfb-b96d-0a12e9725021-run-httpd\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.827160 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c08f63-12a2-4dfb-b96d-0a12e9725021-combined-ca-bundle\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.827197 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/64c08f63-12a2-4dfb-b96d-0a12e9725021-internal-tls-certs\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.827235 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hslk6\" (UniqueName: \"kubernetes.io/projected/64c08f63-12a2-4dfb-b96d-0a12e9725021-kube-api-access-hslk6\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.827258 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/64c08f63-12a2-4dfb-b96d-0a12e9725021-etc-swift\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.827314 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64c08f63-12a2-4dfb-b96d-0a12e9725021-log-httpd\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.827354 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c08f63-12a2-4dfb-b96d-0a12e9725021-config-data\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.827402 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64c08f63-12a2-4dfb-b96d-0a12e9725021-public-tls-certs\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.861501 5008 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/barbican-api-7f9c9f8766-4lf97"] Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.930561 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/64c08f63-12a2-4dfb-b96d-0a12e9725021-internal-tls-certs\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.931220 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cshvj\" (UniqueName: \"kubernetes.io/projected/3b26c725-8ee1-4144-baa0-a4a85bb7e1d2-kube-api-access-cshvj\") pod \"openstackclient\" (UID: \"3b26c725-8ee1-4144-baa0-a4a85bb7e1d2\") " pod="openstack/openstackclient" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.931374 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hslk6\" (UniqueName: \"kubernetes.io/projected/64c08f63-12a2-4dfb-b96d-0a12e9725021-kube-api-access-hslk6\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.931472 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/64c08f63-12a2-4dfb-b96d-0a12e9725021-etc-swift\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.931981 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3b26c725-8ee1-4144-baa0-a4a85bb7e1d2-openstack-config-secret\") pod \"openstackclient\" (UID: \"3b26c725-8ee1-4144-baa0-a4a85bb7e1d2\") " pod="openstack/openstackclient" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.932065 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b26c725-8ee1-4144-baa0-a4a85bb7e1d2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3b26c725-8ee1-4144-baa0-a4a85bb7e1d2\") " pod="openstack/openstackclient" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.932158 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64c08f63-12a2-4dfb-b96d-0a12e9725021-log-httpd\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.932211 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c08f63-12a2-4dfb-b96d-0a12e9725021-config-data\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.932318 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3b26c725-8ee1-4144-baa0-a4a85bb7e1d2-openstack-config\") pod \"openstackclient\" (UID: \"3b26c725-8ee1-4144-baa0-a4a85bb7e1d2\") " 
pod="openstack/openstackclient" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.932365 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64c08f63-12a2-4dfb-b96d-0a12e9725021-public-tls-certs\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.932410 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64c08f63-12a2-4dfb-b96d-0a12e9725021-run-httpd\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.932442 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c08f63-12a2-4dfb-b96d-0a12e9725021-combined-ca-bundle\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.932695 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64c08f63-12a2-4dfb-b96d-0a12e9725021-log-httpd\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.933316 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64c08f63-12a2-4dfb-b96d-0a12e9725021-run-httpd\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.937454 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c08f63-12a2-4dfb-b96d-0a12e9725021-combined-ca-bundle\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.938242 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/64c08f63-12a2-4dfb-b96d-0a12e9725021-internal-tls-certs\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.939225 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/64c08f63-12a2-4dfb-b96d-0a12e9725021-etc-swift\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.947950 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64c08f63-12a2-4dfb-b96d-0a12e9725021-public-tls-certs\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.949296 5008 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c08f63-12a2-4dfb-b96d-0a12e9725021-config-data\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" Jan 29 15:50:13 crc kubenswrapper[5008]: I0129 15:50:13.962906 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hslk6\" (UniqueName: \"kubernetes.io/projected/64c08f63-12a2-4dfb-b96d-0a12e9725021-kube-api-access-hslk6\") pod \"swift-proxy-5c6fbdb57f-zvhpz\" (UID: \"64c08f63-12a2-4dfb-b96d-0a12e9725021\") " pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.033952 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3b26c725-8ee1-4144-baa0-a4a85bb7e1d2-openstack-config\") pod \"openstackclient\" (UID: \"3b26c725-8ee1-4144-baa0-a4a85bb7e1d2\") " pod="openstack/openstackclient" Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.034075 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cshvj\" (UniqueName: \"kubernetes.io/projected/3b26c725-8ee1-4144-baa0-a4a85bb7e1d2-kube-api-access-cshvj\") pod \"openstackclient\" (UID: \"3b26c725-8ee1-4144-baa0-a4a85bb7e1d2\") " pod="openstack/openstackclient" Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.034141 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3b26c725-8ee1-4144-baa0-a4a85bb7e1d2-openstack-config-secret\") pod \"openstackclient\" (UID: \"3b26c725-8ee1-4144-baa0-a4a85bb7e1d2\") " pod="openstack/openstackclient" Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.034197 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b26c725-8ee1-4144-baa0-a4a85bb7e1d2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3b26c725-8ee1-4144-baa0-a4a85bb7e1d2\") " pod="openstack/openstackclient" Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.035763 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3b26c725-8ee1-4144-baa0-a4a85bb7e1d2-openstack-config\") pod \"openstackclient\" (UID: \"3b26c725-8ee1-4144-baa0-a4a85bb7e1d2\") " pod="openstack/openstackclient" Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.038716 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b26c725-8ee1-4144-baa0-a4a85bb7e1d2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3b26c725-8ee1-4144-baa0-a4a85bb7e1d2\") " pod="openstack/openstackclient" Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.046411 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3b26c725-8ee1-4144-baa0-a4a85bb7e1d2-openstack-config-secret\") pod \"openstackclient\" (UID: \"3b26c725-8ee1-4144-baa0-a4a85bb7e1d2\") " pod="openstack/openstackclient" Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.070474 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cshvj\" (UniqueName: \"kubernetes.io/projected/3b26c725-8ee1-4144-baa0-a4a85bb7e1d2-kube-api-access-cshvj\") pod 
\"openstackclient\" (UID: \"3b26c725-8ee1-4144-baa0-a4a85bb7e1d2\") " pod="openstack/openstackclient" Jan 29 15:50:14 crc kubenswrapper[5008]: E0129 15:50:14.076111 5008 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 29 15:50:14 crc kubenswrapper[5008]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_26e3e9ce-4ea8-4746-af4e-21d6f2c9be74_0(f99d6fce45124ddda045eb222dc3739becc85b25f9555e7e404d374652d79289): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f99d6fce45124ddda045eb222dc3739becc85b25f9555e7e404d374652d79289" Netns:"/var/run/netns/cfb342f6-07c6-44bc-9d3f-a3d9dbcd1a06" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=f99d6fce45124ddda045eb222dc3739becc85b25f9555e7e404d374652d79289;K8S_POD_UID=26e3e9ce-4ea8-4746-af4e-21d6f2c9be74" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74]: expected pod UID "26e3e9ce-4ea8-4746-af4e-21d6f2c9be74" but got "3b26c725-8ee1-4144-baa0-a4a85bb7e1d2" from Kube API Jan 29 15:50:14 crc kubenswrapper[5008]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 29 15:50:14 crc kubenswrapper[5008]: > Jan 29 15:50:14 crc kubenswrapper[5008]: E0129 15:50:14.076191 5008 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 29 15:50:14 crc kubenswrapper[5008]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_26e3e9ce-4ea8-4746-af4e-21d6f2c9be74_0(f99d6fce45124ddda045eb222dc3739becc85b25f9555e7e404d374652d79289): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f99d6fce45124ddda045eb222dc3739becc85b25f9555e7e404d374652d79289" Netns:"/var/run/netns/cfb342f6-07c6-44bc-9d3f-a3d9dbcd1a06" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=f99d6fce45124ddda045eb222dc3739becc85b25f9555e7e404d374652d79289;K8S_POD_UID=26e3e9ce-4ea8-4746-af4e-21d6f2c9be74" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74]: expected pod UID "26e3e9ce-4ea8-4746-af4e-21d6f2c9be74" but got "3b26c725-8ee1-4144-baa0-a4a85bb7e1d2" from Kube API Jan 29 15:50:14 crc kubenswrapper[5008]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 29 15:50:14 crc kubenswrapper[5008]: > pod="openstack/openstackclient" Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 
15:50:14.108409 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.136523 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.176564 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" event={"ID":"24c4cc25-9e50-4601-bac2-552e1aded799","Type":"ContainerStarted","Data":"d4bb9c3bf450ab33644a2c34f16296cc32f39dc53c4ed3f5e8f37b10c024982d"} Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.182878 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f9c9f8766-4lf97" event={"ID":"ce981b8e-ff53-48ad-b44e-b150c0b1b80f","Type":"ContainerStarted","Data":"4e511f378927e33787b2a83a1f7b41cfa438148ffe7e2bba89ff6429ae3dda94"} Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.182912 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f9c9f8766-4lf97" event={"ID":"ce981b8e-ff53-48ad-b44e-b150c0b1b80f","Type":"ContainerStarted","Data":"705f9053f043424c55ed90da76ae1b122f1f646741a6cdbb600c0bf424142cc2"} Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.199055 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c46c758ff-5p4jl" event={"ID":"f77f54f0-02b9-4082-8a76-dc78a9b7d08c","Type":"ContainerStarted","Data":"188f7ccadd7fa4a7273e1f297c3797cdf32c0a0265d4db20de98f029d9d205dd"} Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.239119 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-h99wm" event={"ID":"35979baf-dba0-453c-bafd-16985d082448","Type":"ContainerStarted","Data":"517994ddf8724b531c045e361104301810488aaea5740758e3935f990fbe3040"} Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.239188 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.239960 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.242364 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5c46c758ff-5p4jl" podStartSLOduration=2.242246584 podStartE2EDuration="5.242334772s" podCreationTimestamp="2026-01-29 15:50:09 +0000 UTC" firstStartedPulling="2026-01-29 15:50:10.172038443 +0000 UTC m=+1353.844892680" lastFinishedPulling="2026-01-29 15:50:13.172126631 +0000 UTC m=+1356.844980868" observedRunningTime="2026-01-29 15:50:14.237217878 +0000 UTC m=+1357.910072115" watchObservedRunningTime="2026-01-29 15:50:14.242334772 +0000 UTC m=+1357.915189019" Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.292033 5008 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="26e3e9ce-4ea8-4746-af4e-21d6f2c9be74" podUID="3b26c725-8ee1-4144-baa0-a4a85bb7e1d2" Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.292726 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-h99wm" podStartSLOduration=5.292704084 podStartE2EDuration="5.292704084s" podCreationTimestamp="2026-01-29 15:50:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:50:14.279956235 +0000 UTC m=+1357.952810492" watchObservedRunningTime="2026-01-29 15:50:14.292704084 +0000 UTC m=+1357.965558321" Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.563513 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.575376 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.658436 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-combined-ca-bundle\") pod \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\" (UID: \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\") " Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.658598 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrxmg\" (UniqueName: \"kubernetes.io/projected/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-kube-api-access-qrxmg\") pod \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\" (UID: \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\") " Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.658648 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-openstack-config\") pod \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\" (UID: \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\") " Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.658675 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-openstack-config-secret\") pod \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\" (UID: \"26e3e9ce-4ea8-4746-af4e-21d6f2c9be74\") " Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.668588 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "26e3e9ce-4ea8-4746-af4e-21d6f2c9be74" (UID: "26e3e9ce-4ea8-4746-af4e-21d6f2c9be74"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.673057 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "26e3e9ce-4ea8-4746-af4e-21d6f2c9be74" (UID: "26e3e9ce-4ea8-4746-af4e-21d6f2c9be74"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.677750 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26e3e9ce-4ea8-4746-af4e-21d6f2c9be74" (UID: "26e3e9ce-4ea8-4746-af4e-21d6f2c9be74"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.708747 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-kube-api-access-qrxmg" (OuterVolumeSpecName: "kube-api-access-qrxmg") pod "26e3e9ce-4ea8-4746-af4e-21d6f2c9be74" (UID: "26e3e9ce-4ea8-4746-af4e-21d6f2c9be74"). InnerVolumeSpecName "kube-api-access-qrxmg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.761488 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.761520 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrxmg\" (UniqueName: \"kubernetes.io/projected/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-kube-api-access-qrxmg\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.761534 5008 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.761543 5008 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.808028 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 15:50:14 crc kubenswrapper[5008]: I0129 15:50:14.898902 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 29 15:50:15 crc kubenswrapper[5008]: W0129 15:50:15.053146 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64c08f63_12a2_4dfb_b96d_0a12e9725021.slice/crio-0179304a91aeaf706bcbe516bc83c3b2cef97f31f00801d669e6faf9f746a3b0 WatchSource:0}: Error finding container 0179304a91aeaf706bcbe516bc83c3b2cef97f31f00801d669e6faf9f746a3b0: Status 404 returned error can't find the container with id 0179304a91aeaf706bcbe516bc83c3b2cef97f31f00801d669e6faf9f746a3b0 Jan 29 15:50:15 crc kubenswrapper[5008]: I0129 15:50:15.053264 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5c6fbdb57f-zvhpz"] Jan 29 15:50:15 crc kubenswrapper[5008]: I0129 15:50:15.251350 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f9c9f8766-4lf97" event={"ID":"ce981b8e-ff53-48ad-b44e-b150c0b1b80f","Type":"ContainerStarted","Data":"21ab3e5e9c098630f4e65ce2d9d27c6c6fb172b9c0728335e8e146f72a60d6a6"} Jan 29 15:50:15 crc kubenswrapper[5008]: I0129 15:50:15.251619 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:15 crc kubenswrapper[5008]: I0129 15:50:15.251815 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:15 crc kubenswrapper[5008]: I0129 15:50:15.252924 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"3b26c725-8ee1-4144-baa0-a4a85bb7e1d2","Type":"ContainerStarted","Data":"e11174b8f1bc4882d3aaac37c4f644a3449e0accefc630d8e4b56b876aefa9f7"} Jan 29 15:50:15 crc kubenswrapper[5008]: I0129 15:50:15.255498 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c46c758ff-5p4jl" event={"ID":"f77f54f0-02b9-4082-8a76-dc78a9b7d08c","Type":"ContainerStarted","Data":"024a5adcb36407fd6632a358885d2bf858f9bbe76adf4c504c995d93e17ab4b9"} Jan 29 15:50:15 crc kubenswrapper[5008]: I0129 15:50:15.259129 
5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b98db574-9529-4d76-be4d-66b44b61a962","Type":"ContainerStarted","Data":"87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200"} Jan 29 15:50:15 crc kubenswrapper[5008]: I0129 15:50:15.259656 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 15:50:15 crc kubenswrapper[5008]: I0129 15:50:15.261411 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" event={"ID":"64c08f63-12a2-4dfb-b96d-0a12e9725021","Type":"ContainerStarted","Data":"0179304a91aeaf706bcbe516bc83c3b2cef97f31f00801d669e6faf9f746a3b0"} Jan 29 15:50:15 crc kubenswrapper[5008]: I0129 15:50:15.263952 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" event={"ID":"24c4cc25-9e50-4601-bac2-552e1aded799","Type":"ContainerStarted","Data":"8e1bf41bb7757d4555c8defd7cda4fb736b4a3836e9261f60d5c0dae9d4b367d"} Jan 29 15:50:15 crc kubenswrapper[5008]: I0129 15:50:15.264008 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 15:50:15 crc kubenswrapper[5008]: I0129 15:50:15.282198 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7f9c9f8766-4lf97" podStartSLOduration=3.282175497 podStartE2EDuration="3.282175497s" podCreationTimestamp="2026-01-29 15:50:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:50:15.269716325 +0000 UTC m=+1358.942570562" watchObservedRunningTime="2026-01-29 15:50:15.282175497 +0000 UTC m=+1358.955029734" Jan 29 15:50:15 crc kubenswrapper[5008]: I0129 15:50:15.313315 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.51854739 podStartE2EDuration="8.313291662s" podCreationTimestamp="2026-01-29 15:50:07 +0000 UTC" firstStartedPulling="2026-01-29 15:50:09.125259471 +0000 UTC m=+1352.798113698" lastFinishedPulling="2026-01-29 15:50:13.920003733 +0000 UTC m=+1357.592857970" observedRunningTime="2026-01-29 15:50:15.30334024 +0000 UTC m=+1358.976194497" watchObservedRunningTime="2026-01-29 15:50:15.313291662 +0000 UTC m=+1358.986145899" Jan 29 15:50:15 crc kubenswrapper[5008]: I0129 15:50:15.327654 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-d5688bfcd-94rkm" podStartSLOduration=3.188357436 podStartE2EDuration="6.327639719s" podCreationTimestamp="2026-01-29 15:50:09 +0000 UTC" firstStartedPulling="2026-01-29 15:50:10.054875412 +0000 UTC m=+1353.727729649" lastFinishedPulling="2026-01-29 15:50:13.194157695 +0000 UTC m=+1356.867011932" observedRunningTime="2026-01-29 15:50:15.322958956 +0000 UTC m=+1358.995813203" watchObservedRunningTime="2026-01-29 15:50:15.327639719 +0000 UTC m=+1359.000493956" Jan 29 15:50:15 crc kubenswrapper[5008]: I0129 15:50:15.328415 5008 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="26e3e9ce-4ea8-4746-af4e-21d6f2c9be74" podUID="3b26c725-8ee1-4144-baa0-a4a85bb7e1d2" Jan 29 15:50:15 crc kubenswrapper[5008]: I0129 15:50:15.339762 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26e3e9ce-4ea8-4746-af4e-21d6f2c9be74" 
path="/var/lib/kubelet/pods/26e3e9ce-4ea8-4746-af4e-21d6f2c9be74/volumes" Jan 29 15:50:16 crc kubenswrapper[5008]: I0129 15:50:16.017698 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 29 15:50:16 crc kubenswrapper[5008]: I0129 15:50:16.283028 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" event={"ID":"64c08f63-12a2-4dfb-b96d-0a12e9725021","Type":"ContainerStarted","Data":"ec2fce16711c316062f9db7e235b90573123117bde5ab9b3c69b07d18ad9760c"} Jan 29 15:50:16 crc kubenswrapper[5008]: I0129 15:50:16.283373 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" event={"ID":"64c08f63-12a2-4dfb-b96d-0a12e9725021","Type":"ContainerStarted","Data":"782f8005c6f1c16206119dca644f958e03a7a1c84c42e66652134cff73017c15"} Jan 29 15:50:16 crc kubenswrapper[5008]: I0129 15:50:16.283924 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b98db574-9529-4d76-be4d-66b44b61a962" containerName="ceilometer-central-agent" containerID="cri-o://29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32" gracePeriod=30 Jan 29 15:50:16 crc kubenswrapper[5008]: I0129 15:50:16.284013 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b98db574-9529-4d76-be4d-66b44b61a962" containerName="sg-core" containerID="cri-o://0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069" gracePeriod=30 Jan 29 15:50:16 crc kubenswrapper[5008]: I0129 15:50:16.284044 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b98db574-9529-4d76-be4d-66b44b61a962" containerName="ceilometer-notification-agent" containerID="cri-o://90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4" gracePeriod=30 Jan 29 15:50:16 crc kubenswrapper[5008]: I0129 15:50:16.284081 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b98db574-9529-4d76-be4d-66b44b61a962" containerName="proxy-httpd" containerID="cri-o://87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200" gracePeriod=30 Jan 29 15:50:16 crc kubenswrapper[5008]: I0129 15:50:16.284164 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" Jan 29 15:50:16 crc kubenswrapper[5008]: I0129 15:50:16.284197 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" Jan 29 15:50:16 crc kubenswrapper[5008]: I0129 15:50:16.319601 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" podStartSLOduration=3.319577682 podStartE2EDuration="3.319577682s" podCreationTimestamp="2026-01-29 15:50:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:50:16.29847121 +0000 UTC m=+1359.971325447" watchObservedRunningTime="2026-01-29 15:50:16.319577682 +0000 UTC m=+1359.992431929" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.159750 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.209491 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-sg-core-conf-yaml\") pod \"b98db574-9529-4d76-be4d-66b44b61a962\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.209541 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b98db574-9529-4d76-be4d-66b44b61a962-log-httpd\") pod \"b98db574-9529-4d76-be4d-66b44b61a962\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.209594 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nrdl\" (UniqueName: \"kubernetes.io/projected/b98db574-9529-4d76-be4d-66b44b61a962-kube-api-access-7nrdl\") pod \"b98db574-9529-4d76-be4d-66b44b61a962\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.209630 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-combined-ca-bundle\") pod \"b98db574-9529-4d76-be4d-66b44b61a962\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.210201 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-config-data\") pod \"b98db574-9529-4d76-be4d-66b44b61a962\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.210340 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b98db574-9529-4d76-be4d-66b44b61a962-run-httpd\") pod \"b98db574-9529-4d76-be4d-66b44b61a962\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.210367 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-scripts\") pod \"b98db574-9529-4d76-be4d-66b44b61a962\" (UID: \"b98db574-9529-4d76-be4d-66b44b61a962\") " Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.210413 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b98db574-9529-4d76-be4d-66b44b61a962-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b98db574-9529-4d76-be4d-66b44b61a962" (UID: "b98db574-9529-4d76-be4d-66b44b61a962"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.210697 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b98db574-9529-4d76-be4d-66b44b61a962-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b98db574-9529-4d76-be4d-66b44b61a962" (UID: "b98db574-9529-4d76-be4d-66b44b61a962"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.210854 5008 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b98db574-9529-4d76-be4d-66b44b61a962-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.210867 5008 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b98db574-9529-4d76-be4d-66b44b61a962-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.216489 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-scripts" (OuterVolumeSpecName: "scripts") pod "b98db574-9529-4d76-be4d-66b44b61a962" (UID: "b98db574-9529-4d76-be4d-66b44b61a962"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.216505 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b98db574-9529-4d76-be4d-66b44b61a962-kube-api-access-7nrdl" (OuterVolumeSpecName: "kube-api-access-7nrdl") pod "b98db574-9529-4d76-be4d-66b44b61a962" (UID: "b98db574-9529-4d76-be4d-66b44b61a962"). InnerVolumeSpecName "kube-api-access-7nrdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.306966 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b98db574-9529-4d76-be4d-66b44b61a962" (UID: "b98db574-9529-4d76-be4d-66b44b61a962"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.317212 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.317246 5008 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.317263 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nrdl\" (UniqueName: \"kubernetes.io/projected/b98db574-9529-4d76-be4d-66b44b61a962-kube-api-access-7nrdl\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.351315 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b98db574-9529-4d76-be4d-66b44b61a962" (UID: "b98db574-9529-4d76-be4d-66b44b61a962"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.379650 5008 generic.go:334] "Generic (PLEG): container finished" podID="b98db574-9529-4d76-be4d-66b44b61a962" containerID="87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200" exitCode=0 Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.379684 5008 generic.go:334] "Generic (PLEG): container finished" podID="b98db574-9529-4d76-be4d-66b44b61a962" containerID="0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069" exitCode=2 Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.379693 5008 generic.go:334] "Generic (PLEG): container finished" podID="b98db574-9529-4d76-be4d-66b44b61a962" containerID="90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4" exitCode=0 Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.379700 5008 generic.go:334] "Generic (PLEG): container finished" podID="b98db574-9529-4d76-be4d-66b44b61a962" containerID="29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32" exitCode=0 Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.380708 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.388309 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b98db574-9529-4d76-be4d-66b44b61a962","Type":"ContainerDied","Data":"87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200"} Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.388360 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b98db574-9529-4d76-be4d-66b44b61a962","Type":"ContainerDied","Data":"0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069"} Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.388375 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b98db574-9529-4d76-be4d-66b44b61a962","Type":"ContainerDied","Data":"90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4"} Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.388386 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b98db574-9529-4d76-be4d-66b44b61a962","Type":"ContainerDied","Data":"29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32"} Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.388398 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b98db574-9529-4d76-be4d-66b44b61a962","Type":"ContainerDied","Data":"ac8bcb14c02650f4628017163e965fe6e1e75f1116276a7166d11c7831388a13"} Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.388417 5008 scope.go:117] "RemoveContainer" containerID="87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.430457 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-config-data" (OuterVolumeSpecName: "config-data") pod "b98db574-9529-4d76-be4d-66b44b61a962" (UID: "b98db574-9529-4d76-be4d-66b44b61a962"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.430995 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.431021 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98db574-9529-4d76-be4d-66b44b61a962-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.524610 5008 scope.go:117] "RemoveContainer" containerID="0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.551996 5008 scope.go:117] "RemoveContainer" containerID="90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.591822 5008 scope.go:117] "RemoveContainer" containerID="29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.612327 5008 scope.go:117] "RemoveContainer" containerID="87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200" Jan 29 15:50:17 crc kubenswrapper[5008]: E0129 15:50:17.612815 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200\": container with ID starting with 87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200 not found: ID does not exist" containerID="87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.612862 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200"} err="failed to get container status \"87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200\": rpc error: code = NotFound desc = could not find container \"87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200\": container with ID starting with 87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200 not found: ID does not exist" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.612892 5008 scope.go:117] "RemoveContainer" containerID="0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069" Jan 29 15:50:17 crc kubenswrapper[5008]: E0129 15:50:17.613279 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069\": container with ID starting with 0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069 not found: ID does not exist" containerID="0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.613312 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069"} err="failed to get container status \"0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069\": rpc error: code = NotFound desc = could not find container \"0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069\": container with ID starting with 
0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069 not found: ID does not exist" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.613325 5008 scope.go:117] "RemoveContainer" containerID="90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4" Jan 29 15:50:17 crc kubenswrapper[5008]: E0129 15:50:17.613560 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4\": container with ID starting with 90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4 not found: ID does not exist" containerID="90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.613581 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4"} err="failed to get container status \"90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4\": rpc error: code = NotFound desc = could not find container \"90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4\": container with ID starting with 90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4 not found: ID does not exist" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.613593 5008 scope.go:117] "RemoveContainer" containerID="29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32" Jan 29 15:50:17 crc kubenswrapper[5008]: E0129 15:50:17.613766 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32\": container with ID starting with 29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32 not found: ID does not exist" containerID="29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.613796 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32"} err="failed to get container status \"29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32\": rpc error: code = NotFound desc = could not find container \"29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32\": container with ID starting with 29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32 not found: ID does not exist" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.613808 5008 scope.go:117] "RemoveContainer" containerID="87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.613990 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200"} err="failed to get container status \"87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200\": rpc error: code = NotFound desc = could not find container \"87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200\": container with ID starting with 87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200 not found: ID does not exist" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.614015 5008 scope.go:117] "RemoveContainer" containerID="0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069" Jan 29 15:50:17 crc 
kubenswrapper[5008]: I0129 15:50:17.614207 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069"} err="failed to get container status \"0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069\": rpc error: code = NotFound desc = could not find container \"0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069\": container with ID starting with 0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069 not found: ID does not exist" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.614225 5008 scope.go:117] "RemoveContainer" containerID="90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.614393 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4"} err="failed to get container status \"90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4\": rpc error: code = NotFound desc = could not find container \"90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4\": container with ID starting with 90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4 not found: ID does not exist" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.614428 5008 scope.go:117] "RemoveContainer" containerID="29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.614607 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32"} err="failed to get container status \"29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32\": rpc error: code = NotFound desc = could not find container \"29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32\": container with ID starting with 29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32 not found: ID does not exist" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.614630 5008 scope.go:117] "RemoveContainer" containerID="87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.614833 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200"} err="failed to get container status \"87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200\": rpc error: code = NotFound desc = could not find container \"87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200\": container with ID starting with 87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200 not found: ID does not exist" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.614851 5008 scope.go:117] "RemoveContainer" containerID="0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.615036 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069"} err="failed to get container status \"0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069\": rpc error: code = NotFound desc = could not find container \"0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069\": container with ID 
starting with 0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069 not found: ID does not exist" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.615060 5008 scope.go:117] "RemoveContainer" containerID="90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.615232 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4"} err="failed to get container status \"90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4\": rpc error: code = NotFound desc = could not find container \"90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4\": container with ID starting with 90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4 not found: ID does not exist" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.615255 5008 scope.go:117] "RemoveContainer" containerID="29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.615416 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32"} err="failed to get container status \"29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32\": rpc error: code = NotFound desc = could not find container \"29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32\": container with ID starting with 29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32 not found: ID does not exist" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.615434 5008 scope.go:117] "RemoveContainer" containerID="87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.615607 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200"} err="failed to get container status \"87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200\": rpc error: code = NotFound desc = could not find container \"87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200\": container with ID starting with 87637b6186649510ad5e6bf9fde94d36421576f604a4ea89ea5a377eb7dc8200 not found: ID does not exist" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.615642 5008 scope.go:117] "RemoveContainer" containerID="0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.615827 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069"} err="failed to get container status \"0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069\": rpc error: code = NotFound desc = could not find container \"0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069\": container with ID starting with 0b2d6292707a75e758c120738b19a67f88a7bad26c37389a75eb49abc679e069 not found: ID does not exist" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.615860 5008 scope.go:117] "RemoveContainer" containerID="90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.616017 5008 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4"} err="failed to get container status \"90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4\": rpc error: code = NotFound desc = could not find container \"90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4\": container with ID starting with 90e79906614f1aa108747a96f77ccfe3fdb70daf711090972edf7e61f23302c4 not found: ID does not exist" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.616035 5008 scope.go:117] "RemoveContainer" containerID="29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.620188 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32"} err="failed to get container status \"29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32\": rpc error: code = NotFound desc = could not find container \"29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32\": container with ID starting with 29377eababaf8e8e41487afa073b54e532dba60d67b967245b292537b2985d32 not found: ID does not exist" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.721310 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.733404 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.744620 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:17 crc kubenswrapper[5008]: E0129 15:50:17.745211 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b98db574-9529-4d76-be4d-66b44b61a962" containerName="sg-core" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.745237 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="b98db574-9529-4d76-be4d-66b44b61a962" containerName="sg-core" Jan 29 15:50:17 crc kubenswrapper[5008]: E0129 15:50:17.745254 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b98db574-9529-4d76-be4d-66b44b61a962" containerName="ceilometer-central-agent" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.745262 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="b98db574-9529-4d76-be4d-66b44b61a962" containerName="ceilometer-central-agent" Jan 29 15:50:17 crc kubenswrapper[5008]: E0129 15:50:17.745273 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b98db574-9529-4d76-be4d-66b44b61a962" containerName="ceilometer-notification-agent" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.745278 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="b98db574-9529-4d76-be4d-66b44b61a962" containerName="ceilometer-notification-agent" Jan 29 15:50:17 crc kubenswrapper[5008]: E0129 15:50:17.745292 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b98db574-9529-4d76-be4d-66b44b61a962" containerName="proxy-httpd" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.745298 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="b98db574-9529-4d76-be4d-66b44b61a962" containerName="proxy-httpd" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.745524 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="b98db574-9529-4d76-be4d-66b44b61a962" containerName="sg-core" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.745545 5008 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b98db574-9529-4d76-be4d-66b44b61a962" containerName="ceilometer-central-agent" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.745563 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="b98db574-9529-4d76-be4d-66b44b61a962" containerName="ceilometer-notification-agent" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.745576 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="b98db574-9529-4d76-be4d-66b44b61a962" containerName="proxy-httpd" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.748123 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.750589 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.751005 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.755676 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.839593 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-scripts\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.839668 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-log-httpd\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.839728 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-run-httpd\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.839813 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.839855 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m56fh\" (UniqueName: \"kubernetes.io/projected/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-kube-api-access-m56fh\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.839902 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-config-data\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.839933 5008 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.941489 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-scripts\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.941534 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-log-httpd\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.941568 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-run-httpd\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.941608 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.941640 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m56fh\" (UniqueName: \"kubernetes.io/projected/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-kube-api-access-m56fh\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.941656 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-config-data\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.941678 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.942517 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-run-httpd\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.942537 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-log-httpd\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.949045 5008 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.949106 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.950884 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-scripts\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.953864 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-config-data\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:17 crc kubenswrapper[5008]: I0129 15:50:17.968574 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m56fh\" (UniqueName: \"kubernetes.io/projected/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-kube-api-access-m56fh\") pod \"ceilometer-0\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " pod="openstack/ceilometer-0" Jan 29 15:50:18 crc kubenswrapper[5008]: I0129 15:50:18.068456 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:50:18 crc kubenswrapper[5008]: I0129 15:50:18.660578 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:18 crc kubenswrapper[5008]: W0129 15:50:18.669875 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2bd431d_b897_47c3_a9cd_0dc161e88e4b.slice/crio-7a51db6eb1e7e8ce07e43b1ef14d4eb0c28d9c277551db9458bdd280aa7a4d57 WatchSource:0}: Error finding container 7a51db6eb1e7e8ce07e43b1ef14d4eb0c28d9c277551db9458bdd280aa7a4d57: Status 404 returned error can't find the container with id 7a51db6eb1e7e8ce07e43b1ef14d4eb0c28d9c277551db9458bdd280aa7a4d57 Jan 29 15:50:19 crc kubenswrapper[5008]: I0129 15:50:19.135193 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7f49b8c48b-x77zl" podUID="8c3bbcd6-6512-4439-b70d-f46dd6382cfe" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Jan 29 15:50:19 crc kubenswrapper[5008]: I0129 15:50:19.135370 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:50:19 crc kubenswrapper[5008]: I0129 15:50:19.334205 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b98db574-9529-4d76-be4d-66b44b61a962" path="/var/lib/kubelet/pods/b98db574-9529-4d76-be4d-66b44b61a962/volumes" Jan 29 15:50:19 crc kubenswrapper[5008]: I0129 15:50:19.416694 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b2bd431d-b897-47c3-a9cd-0dc161e88e4b","Type":"ContainerStarted","Data":"7a51db6eb1e7e8ce07e43b1ef14d4eb0c28d9c277551db9458bdd280aa7a4d57"} Jan 29 15:50:19 crc kubenswrapper[5008]: I0129 15:50:19.858009 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:50:19 crc kubenswrapper[5008]: I0129 15:50:19.959316 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-774db89647-tm89m"] Jan 29 15:50:19 crc kubenswrapper[5008]: I0129 15:50:19.959640 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-774db89647-tm89m" podUID="198c1bb9-c544-4f02-9b28-983302b67f85" containerName="dnsmasq-dns" containerID="cri-o://3b493622238ba247bd3a423fda4a6f572ff13e66c0b2cd863b93d7fa09956597" gracePeriod=10 Jan 29 15:50:20 crc kubenswrapper[5008]: I0129 15:50:20.402196 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 29 15:50:20 crc kubenswrapper[5008]: I0129 15:50:20.438744 5008 generic.go:334] "Generic (PLEG): container finished" podID="198c1bb9-c544-4f02-9b28-983302b67f85" containerID="3b493622238ba247bd3a423fda4a6f572ff13e66c0b2cd863b93d7fa09956597" exitCode=0 Jan 29 15:50:20 crc kubenswrapper[5008]: I0129 15:50:20.438875 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-774db89647-tm89m" event={"ID":"198c1bb9-c544-4f02-9b28-983302b67f85","Type":"ContainerDied","Data":"3b493622238ba247bd3a423fda4a6f572ff13e66c0b2cd863b93d7fa09956597"} Jan 29 15:50:20 crc kubenswrapper[5008]: I0129 15:50:20.447437 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2bd431d-b897-47c3-a9cd-0dc161e88e4b","Type":"ContainerStarted","Data":"2cd69329d810ce3fa3b4611eebfac91e371ed346ff2bb24f32850c62a9775351"} Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.056603 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.156129 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-ovsdbserver-sb\") pod \"198c1bb9-c544-4f02-9b28-983302b67f85\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.156364 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-dns-svc\") pod \"198c1bb9-c544-4f02-9b28-983302b67f85\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.156439 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-ovsdbserver-nb\") pod \"198c1bb9-c544-4f02-9b28-983302b67f85\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.156494 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-dns-swift-storage-0\") pod \"198c1bb9-c544-4f02-9b28-983302b67f85\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.156536 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlzfw\" (UniqueName: \"kubernetes.io/projected/198c1bb9-c544-4f02-9b28-983302b67f85-kube-api-access-xlzfw\") pod \"198c1bb9-c544-4f02-9b28-983302b67f85\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.156572 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-config\") pod \"198c1bb9-c544-4f02-9b28-983302b67f85\" (UID: \"198c1bb9-c544-4f02-9b28-983302b67f85\") " Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.188719 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/198c1bb9-c544-4f02-9b28-983302b67f85-kube-api-access-xlzfw" (OuterVolumeSpecName: "kube-api-access-xlzfw") pod "198c1bb9-c544-4f02-9b28-983302b67f85" (UID: "198c1bb9-c544-4f02-9b28-983302b67f85"). InnerVolumeSpecName "kube-api-access-xlzfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.235674 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "198c1bb9-c544-4f02-9b28-983302b67f85" (UID: "198c1bb9-c544-4f02-9b28-983302b67f85"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.253113 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "198c1bb9-c544-4f02-9b28-983302b67f85" (UID: "198c1bb9-c544-4f02-9b28-983302b67f85"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.259162 5008 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.259196 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlzfw\" (UniqueName: \"kubernetes.io/projected/198c1bb9-c544-4f02-9b28-983302b67f85-kube-api-access-xlzfw\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.259206 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.281612 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "198c1bb9-c544-4f02-9b28-983302b67f85" (UID: "198c1bb9-c544-4f02-9b28-983302b67f85"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.305578 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "198c1bb9-c544-4f02-9b28-983302b67f85" (UID: "198c1bb9-c544-4f02-9b28-983302b67f85"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.309303 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-config" (OuterVolumeSpecName: "config") pod "198c1bb9-c544-4f02-9b28-983302b67f85" (UID: "198c1bb9-c544-4f02-9b28-983302b67f85"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.360903 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.360942 5008 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.360952 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/198c1bb9-c544-4f02-9b28-983302b67f85-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.504754 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2bd431d-b897-47c3-a9cd-0dc161e88e4b","Type":"ContainerStarted","Data":"e7c5f4991b1ad149f9042ef8cc16274e62273bff4a1c35032832220f1212f3cc"} Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.512557 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-774db89647-tm89m" event={"ID":"198c1bb9-c544-4f02-9b28-983302b67f85","Type":"ContainerDied","Data":"fe4d27a42fca0f64cafefb978a52eff74b34c4b2a357e4ac6b7f8c5c5f84788a"} Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.512619 5008 scope.go:117] "RemoveContainer" containerID="3b493622238ba247bd3a423fda4a6f572ff13e66c0b2cd863b93d7fa09956597" Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.512774 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-774db89647-tm89m" Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.685399 5008 scope.go:117] "RemoveContainer" containerID="5992353136cc63043471174685289b57a122a180a840f4ae96151af03ba57534" Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.686128 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-774db89647-tm89m"] Jan 29 15:50:21 crc kubenswrapper[5008]: I0129 15:50:21.703213 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-774db89647-tm89m"] Jan 29 15:50:22 crc kubenswrapper[5008]: I0129 15:50:22.235563 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:22 crc kubenswrapper[5008]: I0129 15:50:22.521250 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2bd431d-b897-47c3-a9cd-0dc161e88e4b","Type":"ContainerStarted","Data":"2b910cb96fa849f84931b8751a79732414e4c199f41fefcef2d399ce6b622d6b"} Jan 29 15:50:22 crc kubenswrapper[5008]: I0129 15:50:22.787896 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:23 crc kubenswrapper[5008]: I0129 15:50:23.335199 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="198c1bb9-c544-4f02-9b28-983302b67f85" path="/var/lib/kubelet/pods/198c1bb9-c544-4f02-9b28-983302b67f85/volumes" Jan 29 15:50:23 crc kubenswrapper[5008]: I0129 15:50:23.953222 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:24 crc kubenswrapper[5008]: I0129 15:50:24.113154 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" Jan 29 15:50:24 crc kubenswrapper[5008]: I0129 15:50:24.120422 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" Jan 29 15:50:24 crc kubenswrapper[5008]: I0129 15:50:24.124543 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-74c948b66b-9krkd" Jan 29 15:50:24 crc kubenswrapper[5008]: I0129 15:50:24.554512 5008 generic.go:334] "Generic (PLEG): container finished" podID="8c3bbcd6-6512-4439-b70d-f46dd6382cfe" containerID="c27f9304d6725c80976f2a7ffbaadb3b415bca1c1d26fe7cd46a2a94470354ae" exitCode=137 Jan 29 15:50:24 crc kubenswrapper[5008]: I0129 15:50:24.555812 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f49b8c48b-x77zl" event={"ID":"8c3bbcd6-6512-4439-b70d-f46dd6382cfe","Type":"ContainerDied","Data":"c27f9304d6725c80976f2a7ffbaadb3b415bca1c1d26fe7cd46a2a94470354ae"} Jan 29 15:50:24 crc kubenswrapper[5008]: I0129 15:50:24.882463 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7f9c9f8766-4lf97" Jan 29 15:50:24 crc kubenswrapper[5008]: I0129 15:50:24.946618 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-788c485464-442t2"] Jan 29 15:50:24 crc kubenswrapper[5008]: I0129 15:50:24.946828 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-788c485464-442t2" podUID="930b6c6f-40a8-476f-ad73-069c7f2ffeb8" containerName="barbican-api-log" containerID="cri-o://d590c476f44393281718ccb2a8a3e0af02d26c225e5b0e107a503b8af26e4e78" gracePeriod=30 Jan 29 15:50:24 crc kubenswrapper[5008]: I0129 15:50:24.947196 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-788c485464-442t2" podUID="930b6c6f-40a8-476f-ad73-069c7f2ffeb8" containerName="barbican-api" containerID="cri-o://d6a474f9cb662a31c110199317649c60d49d6b8424e25729948f77b95945be36" gracePeriod=30 Jan 29 15:50:24 crc kubenswrapper[5008]: I0129 15:50:24.968202 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-788c485464-442t2" podUID="930b6c6f-40a8-476f-ad73-069c7f2ffeb8" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": EOF" Jan 29 15:50:25 crc kubenswrapper[5008]: I0129 15:50:25.584720 5008 generic.go:334] "Generic (PLEG): container finished" podID="930b6c6f-40a8-476f-ad73-069c7f2ffeb8" containerID="d590c476f44393281718ccb2a8a3e0af02d26c225e5b0e107a503b8af26e4e78" exitCode=143 Jan 29 15:50:25 crc kubenswrapper[5008]: I0129 15:50:25.584766 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-788c485464-442t2" event={"ID":"930b6c6f-40a8-476f-ad73-069c7f2ffeb8","Type":"ContainerDied","Data":"d590c476f44393281718ccb2a8a3e0af02d26c225e5b0e107a503b8af26e4e78"} Jan 29 15:50:27 crc kubenswrapper[5008]: I0129 15:50:27.476974 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:28 crc kubenswrapper[5008]: I0129 15:50:28.388374 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-788c485464-442t2" podUID="930b6c6f-40a8-476f-ad73-069c7f2ffeb8" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": read tcp 10.217.0.2:38762->10.217.0.167:9311: read: connection reset by peer" Jan 29 15:50:28 crc kubenswrapper[5008]: E0129 15:50:28.598475 5008 
cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod930b6c6f_40a8_476f_ad73_069c7f2ffeb8.slice/crio-d6a474f9cb662a31c110199317649c60d49d6b8424e25729948f77b95945be36.scope\": RecentStats: unable to find data in memory cache]" Jan 29 15:50:28 crc kubenswrapper[5008]: I0129 15:50:28.645557 5008 generic.go:334] "Generic (PLEG): container finished" podID="930b6c6f-40a8-476f-ad73-069c7f2ffeb8" containerID="d6a474f9cb662a31c110199317649c60d49d6b8424e25729948f77b95945be36" exitCode=0 Jan 29 15:50:28 crc kubenswrapper[5008]: I0129 15:50:28.645633 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-788c485464-442t2" event={"ID":"930b6c6f-40a8-476f-ad73-069c7f2ffeb8","Type":"ContainerDied","Data":"d6a474f9cb662a31c110199317649c60d49d6b8424e25729948f77b95945be36"} Jan 29 15:50:28 crc kubenswrapper[5008]: I0129 15:50:28.812035 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-98cff5df-8qpcl" Jan 29 15:50:28 crc kubenswrapper[5008]: I0129 15:50:28.881466 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-74c948b66b-9krkd"] Jan 29 15:50:28 crc kubenswrapper[5008]: I0129 15:50:28.881939 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-74c948b66b-9krkd" podUID="0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2" containerName="neutron-api" containerID="cri-o://bdd8b5ad2f9dd0f7075ba3ebd36ca61dffe898dd3c726e03f48336bce5f5eb32" gracePeriod=30 Jan 29 15:50:28 crc kubenswrapper[5008]: I0129 15:50:28.882098 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-74c948b66b-9krkd" podUID="0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2" containerName="neutron-httpd" containerID="cri-o://07ed4b32a695d898c860c162dfa7b0d1cb072e63d6b2dbb86d1f05987c9972fb" gracePeriod=30 Jan 29 15:50:29 crc kubenswrapper[5008]: I0129 15:50:29.136226 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7f49b8c48b-x77zl" podUID="8c3bbcd6-6512-4439-b70d-f46dd6382cfe" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Jan 29 15:50:29 crc kubenswrapper[5008]: I0129 15:50:29.658462 5008 generic.go:334] "Generic (PLEG): container finished" podID="0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2" containerID="07ed4b32a695d898c860c162dfa7b0d1cb072e63d6b2dbb86d1f05987c9972fb" exitCode=0 Jan 29 15:50:29 crc kubenswrapper[5008]: I0129 15:50:29.658508 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c948b66b-9krkd" event={"ID":"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2","Type":"ContainerDied","Data":"07ed4b32a695d898c860c162dfa7b0d1cb072e63d6b2dbb86d1f05987c9972fb"} Jan 29 15:50:29 crc kubenswrapper[5008]: I0129 15:50:29.870664 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-788c485464-442t2" podUID="930b6c6f-40a8-476f-ad73-069c7f2ffeb8" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": dial tcp 10.217.0.167:9311: connect: connection refused" Jan 29 15:50:29 crc kubenswrapper[5008]: I0129 15:50:29.871076 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:29 crc kubenswrapper[5008]: I0129 15:50:29.870693 5008 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openstack/barbican-api-788c485464-442t2" podUID="930b6c6f-40a8-476f-ad73-069c7f2ffeb8" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": dial tcp 10.217.0.167:9311: connect: connection refused" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.782768 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.839990 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.871453 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-horizon-tls-certs\") pod \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.871572 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-combined-ca-bundle\") pod \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.871638 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-config-data\") pod \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.871665 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-horizon-secret-key\") pod \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.871762 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-logs\") pod \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.871833 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxxxg\" (UniqueName: \"kubernetes.io/projected/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-kube-api-access-vxxxg\") pod \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.871897 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-scripts\") pod \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\" (UID: \"8c3bbcd6-6512-4439-b70d-f46dd6382cfe\") " Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.879446 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-logs" (OuterVolumeSpecName: "logs") pod "8c3bbcd6-6512-4439-b70d-f46dd6382cfe" (UID: "8c3bbcd6-6512-4439-b70d-f46dd6382cfe"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.883810 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-kube-api-access-vxxxg" (OuterVolumeSpecName: "kube-api-access-vxxxg") pod "8c3bbcd6-6512-4439-b70d-f46dd6382cfe" (UID: "8c3bbcd6-6512-4439-b70d-f46dd6382cfe"). InnerVolumeSpecName "kube-api-access-vxxxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.883875 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "8c3bbcd6-6512-4439-b70d-f46dd6382cfe" (UID: "8c3bbcd6-6512-4439-b70d-f46dd6382cfe"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.900076 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-config-data" (OuterVolumeSpecName: "config-data") pod "8c3bbcd6-6512-4439-b70d-f46dd6382cfe" (UID: "8c3bbcd6-6512-4439-b70d-f46dd6382cfe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.904549 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c3bbcd6-6512-4439-b70d-f46dd6382cfe" (UID: "8c3bbcd6-6512-4439-b70d-f46dd6382cfe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.922060 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-scripts" (OuterVolumeSpecName: "scripts") pod "8c3bbcd6-6512-4439-b70d-f46dd6382cfe" (UID: "8c3bbcd6-6512-4439-b70d-f46dd6382cfe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.932808 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "8c3bbcd6-6512-4439-b70d-f46dd6382cfe" (UID: "8c3bbcd6-6512-4439-b70d-f46dd6382cfe"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.973289 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-config-data-custom\") pod \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.973424 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-config-data\") pod \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.973463 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-combined-ca-bundle\") pod \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.973545 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-logs\") pod \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.973590 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gt8j9\" (UniqueName: \"kubernetes.io/projected/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-kube-api-access-gt8j9\") pod \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\" (UID: \"930b6c6f-40a8-476f-ad73-069c7f2ffeb8\") " Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.974105 5008 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.974132 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.974146 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.974157 5008 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.974167 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.974178 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxxxg\" (UniqueName: \"kubernetes.io/projected/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-kube-api-access-vxxxg\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.974190 5008 reconciler_common.go:293] "Volume detached for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/8c3bbcd6-6512-4439-b70d-f46dd6382cfe-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.983005 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "930b6c6f-40a8-476f-ad73-069c7f2ffeb8" (UID: "930b6c6f-40a8-476f-ad73-069c7f2ffeb8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.989366 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-logs" (OuterVolumeSpecName: "logs") pod "930b6c6f-40a8-476f-ad73-069c7f2ffeb8" (UID: "930b6c6f-40a8-476f-ad73-069c7f2ffeb8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:50:31 crc kubenswrapper[5008]: I0129 15:50:31.990770 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-kube-api-access-gt8j9" (OuterVolumeSpecName: "kube-api-access-gt8j9") pod "930b6c6f-40a8-476f-ad73-069c7f2ffeb8" (UID: "930b6c6f-40a8-476f-ad73-069c7f2ffeb8"). InnerVolumeSpecName "kube-api-access-gt8j9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.016928 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "930b6c6f-40a8-476f-ad73-069c7f2ffeb8" (UID: "930b6c6f-40a8-476f-ad73-069c7f2ffeb8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.069724 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-config-data" (OuterVolumeSpecName: "config-data") pod "930b6c6f-40a8-476f-ad73-069c7f2ffeb8" (UID: "930b6c6f-40a8-476f-ad73-069c7f2ffeb8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.075868 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.075913 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.075927 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.075935 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gt8j9\" (UniqueName: \"kubernetes.io/projected/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-kube-api-access-gt8j9\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.075945 5008 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/930b6c6f-40a8-476f-ad73-069c7f2ffeb8-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.689161 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f49b8c48b-x77zl" event={"ID":"8c3bbcd6-6512-4439-b70d-f46dd6382cfe","Type":"ContainerDied","Data":"dac0f8e5f596bebb7822b413588359e7076b890b5ffed6cda246c2680781b018"} Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.689172 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f49b8c48b-x77zl" Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.689222 5008 scope.go:117] "RemoveContainer" containerID="864603c565caf07038d917f5b4aaaeae46b873a4ad67b66ea1932218a20e7fdd" Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.691192 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"3b26c725-8ee1-4144-baa0-a4a85bb7e1d2","Type":"ContainerStarted","Data":"a65066fbb5d55199948471854794db9995525f198fcafd03654ba2cce2be6f2e"} Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.693584 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-788c485464-442t2" event={"ID":"930b6c6f-40a8-476f-ad73-069c7f2ffeb8","Type":"ContainerDied","Data":"32662f5b5d6c2d9d8f2c316606503b0bdf87ca2b613c9eca5e18d259a4b9490d"} Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.693664 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-788c485464-442t2" Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.696342 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2bd431d-b897-47c3-a9cd-0dc161e88e4b","Type":"ContainerStarted","Data":"b2d472f4d9757fbdb9a1f6bd1271a797915cd6d1101f35ba32bd90669d6f3415"} Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.696511 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerName="ceilometer-central-agent" containerID="cri-o://2cd69329d810ce3fa3b4611eebfac91e371ed346ff2bb24f32850c62a9775351" gracePeriod=30 Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.696615 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.696650 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerName="sg-core" containerID="cri-o://2b910cb96fa849f84931b8751a79732414e4c199f41fefcef2d399ce6b622d6b" gracePeriod=30 Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.696684 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerName="ceilometer-notification-agent" containerID="cri-o://e7c5f4991b1ad149f9042ef8cc16274e62273bff4a1c35032832220f1212f3cc" gracePeriod=30 Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.696664 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerName="proxy-httpd" containerID="cri-o://b2d472f4d9757fbdb9a1f6bd1271a797915cd6d1101f35ba32bd90669d6f3415" gracePeriod=30 Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.711256 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.966000365 podStartE2EDuration="19.711232926s" podCreationTimestamp="2026-01-29 15:50:13 +0000 UTC" firstStartedPulling="2026-01-29 15:50:14.810032094 +0000 UTC m=+1358.482886331" lastFinishedPulling="2026-01-29 15:50:31.555264655 +0000 UTC m=+1375.228118892" observedRunningTime="2026-01-29 15:50:32.711041312 +0000 UTC m=+1376.383895549" watchObservedRunningTime="2026-01-29 15:50:32.711232926 +0000 UTC m=+1376.384087183" Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.775308 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.963267787 podStartE2EDuration="15.77529053s" podCreationTimestamp="2026-01-29 15:50:17 +0000 UTC" firstStartedPulling="2026-01-29 15:50:18.674092305 +0000 UTC m=+1362.346946542" lastFinishedPulling="2026-01-29 15:50:31.486115048 +0000 UTC m=+1375.158969285" observedRunningTime="2026-01-29 15:50:32.755875419 +0000 UTC m=+1376.428729666" watchObservedRunningTime="2026-01-29 15:50:32.77529053 +0000 UTC m=+1376.448144767" Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.814075 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7f49b8c48b-x77zl"] Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.842033 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7f49b8c48b-x77zl"] Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.842065 5008 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-788c485464-442t2"] Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.842076 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-788c485464-442t2"] Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.892961 5008 scope.go:117] "RemoveContainer" containerID="c27f9304d6725c80976f2a7ffbaadb3b415bca1c1d26fe7cd46a2a94470354ae" Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.957083 5008 scope.go:117] "RemoveContainer" containerID="d6a474f9cb662a31c110199317649c60d49d6b8424e25729948f77b95945be36" Jan 29 15:50:32 crc kubenswrapper[5008]: I0129 15:50:32.985769 5008 scope.go:117] "RemoveContainer" containerID="d590c476f44393281718ccb2a8a3e0af02d26c225e5b0e107a503b8af26e4e78" Jan 29 15:50:33 crc kubenswrapper[5008]: I0129 15:50:33.335265 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c3bbcd6-6512-4439-b70d-f46dd6382cfe" path="/var/lib/kubelet/pods/8c3bbcd6-6512-4439-b70d-f46dd6382cfe/volumes" Jan 29 15:50:33 crc kubenswrapper[5008]: I0129 15:50:33.336061 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="930b6c6f-40a8-476f-ad73-069c7f2ffeb8" path="/var/lib/kubelet/pods/930b6c6f-40a8-476f-ad73-069c7f2ffeb8/volumes" Jan 29 15:50:33 crc kubenswrapper[5008]: I0129 15:50:33.707294 5008 generic.go:334] "Generic (PLEG): container finished" podID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerID="b2d472f4d9757fbdb9a1f6bd1271a797915cd6d1101f35ba32bd90669d6f3415" exitCode=0 Jan 29 15:50:33 crc kubenswrapper[5008]: I0129 15:50:33.707318 5008 generic.go:334] "Generic (PLEG): container finished" podID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerID="2b910cb96fa849f84931b8751a79732414e4c199f41fefcef2d399ce6b622d6b" exitCode=2 Jan 29 15:50:33 crc kubenswrapper[5008]: I0129 15:50:33.707325 5008 generic.go:334] "Generic (PLEG): container finished" podID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerID="2cd69329d810ce3fa3b4611eebfac91e371ed346ff2bb24f32850c62a9775351" exitCode=0 Jan 29 15:50:33 crc kubenswrapper[5008]: I0129 15:50:33.707355 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2bd431d-b897-47c3-a9cd-0dc161e88e4b","Type":"ContainerDied","Data":"b2d472f4d9757fbdb9a1f6bd1271a797915cd6d1101f35ba32bd90669d6f3415"} Jan 29 15:50:33 crc kubenswrapper[5008]: I0129 15:50:33.707375 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2bd431d-b897-47c3-a9cd-0dc161e88e4b","Type":"ContainerDied","Data":"2b910cb96fa849f84931b8751a79732414e4c199f41fefcef2d399ce6b622d6b"} Jan 29 15:50:33 crc kubenswrapper[5008]: I0129 15:50:33.707385 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2bd431d-b897-47c3-a9cd-0dc161e88e4b","Type":"ContainerDied","Data":"2cd69329d810ce3fa3b4611eebfac91e371ed346ff2bb24f32850c62a9775351"} Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.249985 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.338963 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-config-data\") pod \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.339014 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-combined-ca-bundle\") pod \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.339171 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-log-httpd\") pod \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.339249 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m56fh\" (UniqueName: \"kubernetes.io/projected/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-kube-api-access-m56fh\") pod \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.339287 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-run-httpd\") pod \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.339340 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-sg-core-conf-yaml\") pod \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.339364 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-scripts\") pod \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\" (UID: \"b2bd431d-b897-47c3-a9cd-0dc161e88e4b\") " Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.341440 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b2bd431d-b897-47c3-a9cd-0dc161e88e4b" (UID: "b2bd431d-b897-47c3-a9cd-0dc161e88e4b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.342965 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b2bd431d-b897-47c3-a9cd-0dc161e88e4b" (UID: "b2bd431d-b897-47c3-a9cd-0dc161e88e4b"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.347911 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-scripts" (OuterVolumeSpecName: "scripts") pod "b2bd431d-b897-47c3-a9cd-0dc161e88e4b" (UID: "b2bd431d-b897-47c3-a9cd-0dc161e88e4b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.366377 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-kube-api-access-m56fh" (OuterVolumeSpecName: "kube-api-access-m56fh") pod "b2bd431d-b897-47c3-a9cd-0dc161e88e4b" (UID: "b2bd431d-b897-47c3-a9cd-0dc161e88e4b"). InnerVolumeSpecName "kube-api-access-m56fh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.378943 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b2bd431d-b897-47c3-a9cd-0dc161e88e4b" (UID: "b2bd431d-b897-47c3-a9cd-0dc161e88e4b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.442939 5008 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.442972 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.442981 5008 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.442990 5008 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.443000 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m56fh\" (UniqueName: \"kubernetes.io/projected/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-kube-api-access-m56fh\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.521668 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2bd431d-b897-47c3-a9cd-0dc161e88e4b" (UID: "b2bd431d-b897-47c3-a9cd-0dc161e88e4b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.544179 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.602431 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-config-data" (OuterVolumeSpecName: "config-data") pod "b2bd431d-b897-47c3-a9cd-0dc161e88e4b" (UID: "b2bd431d-b897-47c3-a9cd-0dc161e88e4b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.646319 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2bd431d-b897-47c3-a9cd-0dc161e88e4b-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.727317 5008 generic.go:334] "Generic (PLEG): container finished" podID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerID="e7c5f4991b1ad149f9042ef8cc16274e62273bff4a1c35032832220f1212f3cc" exitCode=0 Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.727374 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2bd431d-b897-47c3-a9cd-0dc161e88e4b","Type":"ContainerDied","Data":"e7c5f4991b1ad149f9042ef8cc16274e62273bff4a1c35032832220f1212f3cc"} Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.727400 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2bd431d-b897-47c3-a9cd-0dc161e88e4b","Type":"ContainerDied","Data":"7a51db6eb1e7e8ce07e43b1ef14d4eb0c28d9c277551db9458bdd280aa7a4d57"} Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.727418 5008 scope.go:117] "RemoveContainer" containerID="b2d472f4d9757fbdb9a1f6bd1271a797915cd6d1101f35ba32bd90669d6f3415" Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.727502 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.734631 5008 generic.go:334] "Generic (PLEG): container finished" podID="0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2" containerID="bdd8b5ad2f9dd0f7075ba3ebd36ca61dffe898dd3c726e03f48336bce5f5eb32" exitCode=0
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.734680 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c948b66b-9krkd" event={"ID":"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2","Type":"ContainerDied","Data":"bdd8b5ad2f9dd0f7075ba3ebd36ca61dffe898dd3c726e03f48336bce5f5eb32"}
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.755549 5008 scope.go:117] "RemoveContainer" containerID="2b910cb96fa849f84931b8751a79732414e4c199f41fefcef2d399ce6b622d6b"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.775084 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.799260 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.804018 5008 scope.go:117] "RemoveContainer" containerID="e7c5f4991b1ad149f9042ef8cc16274e62273bff4a1c35032832220f1212f3cc"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.810140 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 29 15:50:35 crc kubenswrapper[5008]: E0129 15:50:35.810608 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerName="proxy-httpd"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.810629 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerName="proxy-httpd"
Jan 29 15:50:35 crc kubenswrapper[5008]: E0129 15:50:35.810642 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerName="ceilometer-notification-agent"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.810658 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerName="ceilometer-notification-agent"
Jan 29 15:50:35 crc kubenswrapper[5008]: E0129 15:50:35.810678 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="198c1bb9-c544-4f02-9b28-983302b67f85" containerName="init"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.810685 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="198c1bb9-c544-4f02-9b28-983302b67f85" containerName="init"
Jan 29 15:50:35 crc kubenswrapper[5008]: E0129 15:50:35.810694 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="930b6c6f-40a8-476f-ad73-069c7f2ffeb8" containerName="barbican-api-log"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.810700 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="930b6c6f-40a8-476f-ad73-069c7f2ffeb8" containerName="barbican-api-log"
Jan 29 15:50:35 crc kubenswrapper[5008]: E0129 15:50:35.810716 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c3bbcd6-6512-4439-b70d-f46dd6382cfe" containerName="horizon"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.810722 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c3bbcd6-6512-4439-b70d-f46dd6382cfe" containerName="horizon"
Jan 29 15:50:35 crc kubenswrapper[5008]: E0129 15:50:35.810732 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerName="sg-core"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.810751 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerName="sg-core"
Jan 29 15:50:35 crc kubenswrapper[5008]: E0129 15:50:35.810764 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c3bbcd6-6512-4439-b70d-f46dd6382cfe" containerName="horizon-log"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.810770 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c3bbcd6-6512-4439-b70d-f46dd6382cfe" containerName="horizon-log"
Jan 29 15:50:35 crc kubenswrapper[5008]: E0129 15:50:35.810828 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerName="ceilometer-central-agent"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.810837 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerName="ceilometer-central-agent"
Jan 29 15:50:35 crc kubenswrapper[5008]: E0129 15:50:35.810848 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="198c1bb9-c544-4f02-9b28-983302b67f85" containerName="dnsmasq-dns"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.810857 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="198c1bb9-c544-4f02-9b28-983302b67f85" containerName="dnsmasq-dns"
Jan 29 15:50:35 crc kubenswrapper[5008]: E0129 15:50:35.810873 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="930b6c6f-40a8-476f-ad73-069c7f2ffeb8" containerName="barbican-api"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.810879 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="930b6c6f-40a8-476f-ad73-069c7f2ffeb8" containerName="barbican-api"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.811069 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="198c1bb9-c544-4f02-9b28-983302b67f85" containerName="dnsmasq-dns"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.811086 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerName="ceilometer-notification-agent"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.811096 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerName="ceilometer-central-agent"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.811107 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="930b6c6f-40a8-476f-ad73-069c7f2ffeb8" containerName="barbican-api"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.811120 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerName="sg-core"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.811129 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c3bbcd6-6512-4439-b70d-f46dd6382cfe" containerName="horizon-log"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.811136 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" containerName="proxy-httpd"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.811146 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c3bbcd6-6512-4439-b70d-f46dd6382cfe" containerName="horizon"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.811154 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="930b6c6f-40a8-476f-ad73-069c7f2ffeb8" containerName="barbican-api-log"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.821575 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.825026 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.825191 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.825227 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.833771 5008 scope.go:117] "RemoveContainer" containerID="2cd69329d810ce3fa3b4611eebfac91e371ed346ff2bb24f32850c62a9775351"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.947513 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74c948b66b-9krkd"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.950775 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c81636ad-f799-43f6-8304-b2121e7bb427-log-httpd\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.950852 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-config-data\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.950872 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.950922 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.950980 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c81636ad-f799-43f6-8304-b2121e7bb427-run-httpd\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.951156 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rfpc\" (UniqueName: \"kubernetes.io/projected/c81636ad-f799-43f6-8304-b2121e7bb427-kube-api-access-6rfpc\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.951242 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-scripts\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.951654 5008 scope.go:117] "RemoveContainer" containerID="b2d472f4d9757fbdb9a1f6bd1271a797915cd6d1101f35ba32bd90669d6f3415"
Jan 29 15:50:35 crc kubenswrapper[5008]: E0129 15:50:35.952017 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2d472f4d9757fbdb9a1f6bd1271a797915cd6d1101f35ba32bd90669d6f3415\": container with ID starting with b2d472f4d9757fbdb9a1f6bd1271a797915cd6d1101f35ba32bd90669d6f3415 not found: ID does not exist" containerID="b2d472f4d9757fbdb9a1f6bd1271a797915cd6d1101f35ba32bd90669d6f3415"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.952062 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2d472f4d9757fbdb9a1f6bd1271a797915cd6d1101f35ba32bd90669d6f3415"} err="failed to get container status \"b2d472f4d9757fbdb9a1f6bd1271a797915cd6d1101f35ba32bd90669d6f3415\": rpc error: code = NotFound desc = could not find container \"b2d472f4d9757fbdb9a1f6bd1271a797915cd6d1101f35ba32bd90669d6f3415\": container with ID starting with b2d472f4d9757fbdb9a1f6bd1271a797915cd6d1101f35ba32bd90669d6f3415 not found: ID does not exist"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.952097 5008 scope.go:117] "RemoveContainer" containerID="2b910cb96fa849f84931b8751a79732414e4c199f41fefcef2d399ce6b622d6b"
Jan 29 15:50:35 crc kubenswrapper[5008]: E0129 15:50:35.952365 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b910cb96fa849f84931b8751a79732414e4c199f41fefcef2d399ce6b622d6b\": container with ID starting with 2b910cb96fa849f84931b8751a79732414e4c199f41fefcef2d399ce6b622d6b not found: ID does not exist" containerID="2b910cb96fa849f84931b8751a79732414e4c199f41fefcef2d399ce6b622d6b"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.952391 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b910cb96fa849f84931b8751a79732414e4c199f41fefcef2d399ce6b622d6b"} err="failed to get container status \"2b910cb96fa849f84931b8751a79732414e4c199f41fefcef2d399ce6b622d6b\": rpc error: code = NotFound desc = could not find container \"2b910cb96fa849f84931b8751a79732414e4c199f41fefcef2d399ce6b622d6b\": container with ID starting with 2b910cb96fa849f84931b8751a79732414e4c199f41fefcef2d399ce6b622d6b not found: ID does not exist"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.952408 5008 scope.go:117] "RemoveContainer" containerID="e7c5f4991b1ad149f9042ef8cc16274e62273bff4a1c35032832220f1212f3cc"
Jan 29 15:50:35 crc kubenswrapper[5008]: E0129 15:50:35.952594 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7c5f4991b1ad149f9042ef8cc16274e62273bff4a1c35032832220f1212f3cc\": container with ID starting with e7c5f4991b1ad149f9042ef8cc16274e62273bff4a1c35032832220f1212f3cc not found: ID does not exist" containerID="e7c5f4991b1ad149f9042ef8cc16274e62273bff4a1c35032832220f1212f3cc"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.952618 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7c5f4991b1ad149f9042ef8cc16274e62273bff4a1c35032832220f1212f3cc"} err="failed to get container status \"e7c5f4991b1ad149f9042ef8cc16274e62273bff4a1c35032832220f1212f3cc\": rpc error: code = NotFound desc = could not find container \"e7c5f4991b1ad149f9042ef8cc16274e62273bff4a1c35032832220f1212f3cc\": container with ID starting with e7c5f4991b1ad149f9042ef8cc16274e62273bff4a1c35032832220f1212f3cc not found: ID does not exist"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.952634 5008 scope.go:117] "RemoveContainer" containerID="2cd69329d810ce3fa3b4611eebfac91e371ed346ff2bb24f32850c62a9775351"
Jan 29 15:50:35 crc kubenswrapper[5008]: E0129 15:50:35.952852 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cd69329d810ce3fa3b4611eebfac91e371ed346ff2bb24f32850c62a9775351\": container with ID starting with 2cd69329d810ce3fa3b4611eebfac91e371ed346ff2bb24f32850c62a9775351 not found: ID does not exist" containerID="2cd69329d810ce3fa3b4611eebfac91e371ed346ff2bb24f32850c62a9775351"
Jan 29 15:50:35 crc kubenswrapper[5008]: I0129 15:50:35.952881 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cd69329d810ce3fa3b4611eebfac91e371ed346ff2bb24f32850c62a9775351"} err="failed to get container status \"2cd69329d810ce3fa3b4611eebfac91e371ed346ff2bb24f32850c62a9775351\": rpc error: code = NotFound desc = could not find container \"2cd69329d810ce3fa3b4611eebfac91e371ed346ff2bb24f32850c62a9775351\": container with ID starting with 2cd69329d810ce3fa3b4611eebfac91e371ed346ff2bb24f32850c62a9775351 not found: ID does not exist"
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.052131 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-httpd-config\") pod \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") "
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.052228 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-ovndb-tls-certs\") pod \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") "
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.052294 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-config\") pod \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") "
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.052414 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhflq\" (UniqueName: \"kubernetes.io/projected/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-kube-api-access-lhflq\") pod \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") "
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.052463 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-combined-ca-bundle\") pod \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\" (UID: \"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2\") "
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.052655 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-config-data\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0"
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.052675 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0"
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.052691 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0"
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.052726 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c81636ad-f799-43f6-8304-b2121e7bb427-run-httpd\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0"
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.052772 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rfpc\" (UniqueName: \"kubernetes.io/projected/c81636ad-f799-43f6-8304-b2121e7bb427-kube-api-access-6rfpc\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0"
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.052813 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-scripts\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0"
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.052884 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c81636ad-f799-43f6-8304-b2121e7bb427-log-httpd\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0"
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.053354 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c81636ad-f799-43f6-8304-b2121e7bb427-log-httpd\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0"
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.057669 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-config-data\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0"
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.058556 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0"
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.059939 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c81636ad-f799-43f6-8304-b2121e7bb427-run-httpd\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0"
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.060061 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2" (UID: "0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.061614 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0"
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.063410 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-scripts\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0"
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.064673 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-kube-api-access-lhflq" (OuterVolumeSpecName: "kube-api-access-lhflq") pod "0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2" (UID: "0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2"). InnerVolumeSpecName "kube-api-access-lhflq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.077034 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rfpc\" (UniqueName: \"kubernetes.io/projected/c81636ad-f799-43f6-8304-b2121e7bb427-kube-api-access-6rfpc\") pod \"ceilometer-0\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " pod="openstack/ceilometer-0"
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.108191 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2" (UID: "0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.122851 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-config" (OuterVolumeSpecName: "config") pod "0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2" (UID: "0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.140330 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2" (UID: "0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.154533 5008 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.154563 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-config\") on node \"crc\" DevicePath \"\""
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.154573 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhflq\" (UniqueName: \"kubernetes.io/projected/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-kube-api-access-lhflq\") on node \"crc\" DevicePath \"\""
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.154584 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.154592 5008 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.243878 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 15:50:36 crc kubenswrapper[5008]: W0129 15:50:36.693686 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc81636ad_f799_43f6_8304_b2121e7bb427.slice/crio-0e23d38c1351d3b9d8ce539ce39bcaaeb12db97fb4d36c36c739e94b79c66551 WatchSource:0}: Error finding container 0e23d38c1351d3b9d8ce539ce39bcaaeb12db97fb4d36c36c739e94b79c66551: Status 404 returned error can't find the container with id 0e23d38c1351d3b9d8ce539ce39bcaaeb12db97fb4d36c36c739e94b79c66551
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.694055 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.744279 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74c948b66b-9krkd"
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.744474 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c948b66b-9krkd" event={"ID":"0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2","Type":"ContainerDied","Data":"04b65eba50b91345633c6fc5a3520c31c3922a473da83be590641f8a8f92912a"}
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.744629 5008 scope.go:117] "RemoveContainer" containerID="07ed4b32a695d898c860c162dfa7b0d1cb072e63d6b2dbb86d1f05987c9972fb"
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.745501 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c81636ad-f799-43f6-8304-b2121e7bb427","Type":"ContainerStarted","Data":"0e23d38c1351d3b9d8ce539ce39bcaaeb12db97fb4d36c36c739e94b79c66551"}
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.781444 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-74c948b66b-9krkd"]
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.788463 5008 scope.go:117] "RemoveContainer" containerID="bdd8b5ad2f9dd0f7075ba3ebd36ca61dffe898dd3c726e03f48336bce5f5eb32"
Jan 29 15:50:36 crc kubenswrapper[5008]: I0129 15:50:36.796674 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-74c948b66b-9krkd"]
Jan 29 15:50:37 crc kubenswrapper[5008]: I0129 15:50:37.337773 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2" path="/var/lib/kubelet/pods/0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2/volumes"
Jan 29 15:50:37 crc kubenswrapper[5008]: I0129 15:50:37.342320 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2bd431d-b897-47c3-a9cd-0dc161e88e4b" path="/var/lib/kubelet/pods/b2bd431d-b897-47c3-a9cd-0dc161e88e4b/volumes"
Jan 29 15:50:37 crc kubenswrapper[5008]: I0129 15:50:37.757026 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c81636ad-f799-43f6-8304-b2121e7bb427","Type":"ContainerStarted","Data":"5bdb92bd8804311389315e1c2733efae43b86032b34ec9f92e93486c776777f4"}
Jan 29 15:50:38 crc kubenswrapper[5008]: I0129 15:50:38.775244 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c81636ad-f799-43f6-8304-b2121e7bb427","Type":"ContainerStarted","Data":"57b9f0118bc63b684df15ec4953cbf43eb08b4c8cd41ed4c65c18bdbe33f4dce"}
Jan 29 15:50:38 crc kubenswrapper[5008]: I0129 15:50:38.889618 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-lmdpk"]
Jan 29 15:50:38 crc kubenswrapper[5008]: E0129 15:50:38.890482 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2" containerName="neutron-httpd"
Jan 29 15:50:38 crc kubenswrapper[5008]: I0129 15:50:38.890546 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2" containerName="neutron-httpd"
Jan 29 15:50:38 crc kubenswrapper[5008]: E0129 15:50:38.890676 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2" containerName="neutron-api"
Jan 29 15:50:38 crc kubenswrapper[5008]: I0129 15:50:38.890727 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2" containerName="neutron-api"
Jan 29 15:50:38 crc kubenswrapper[5008]: I0129 15:50:38.890944 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2" containerName="neutron-httpd"
Jan 29 15:50:38 crc kubenswrapper[5008]: I0129 15:50:38.891008 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a0310f9-e8a2-4f0f-8e33-0b6fa798c4e2" containerName="neutron-api"
Jan 29 15:50:38 crc kubenswrapper[5008]: I0129 15:50:38.891610 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-lmdpk"
Jan 29 15:50:38 crc kubenswrapper[5008]: I0129 15:50:38.939918 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-lmdpk"]
Jan 29 15:50:38 crc kubenswrapper[5008]: I0129 15:50:38.999943 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-9xnkt"]
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.000937 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-9xnkt"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.003206 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh4cb\" (UniqueName: \"kubernetes.io/projected/7f34f608-b2f8-452e-8f0d-ef600929c36e-kube-api-access-wh4cb\") pod \"nova-api-db-create-lmdpk\" (UID: \"7f34f608-b2f8-452e-8f0d-ef600929c36e\") " pod="openstack/nova-api-db-create-lmdpk"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.003242 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f34f608-b2f8-452e-8f0d-ef600929c36e-operator-scripts\") pod \"nova-api-db-create-lmdpk\" (UID: \"7f34f608-b2f8-452e-8f0d-ef600929c36e\") " pod="openstack/nova-api-db-create-lmdpk"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.021097 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-9xnkt"]
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.104496 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdg6w\" (UniqueName: \"kubernetes.io/projected/d6a58042-fefd-43b8-b186-905dcfc7b1af-kube-api-access-gdg6w\") pod \"nova-cell0-db-create-9xnkt\" (UID: \"d6a58042-fefd-43b8-b186-905dcfc7b1af\") " pod="openstack/nova-cell0-db-create-9xnkt"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.104537 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6a58042-fefd-43b8-b186-905dcfc7b1af-operator-scripts\") pod \"nova-cell0-db-create-9xnkt\" (UID: \"d6a58042-fefd-43b8-b186-905dcfc7b1af\") " pod="openstack/nova-cell0-db-create-9xnkt"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.104593 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh4cb\" (UniqueName: \"kubernetes.io/projected/7f34f608-b2f8-452e-8f0d-ef600929c36e-kube-api-access-wh4cb\") pod \"nova-api-db-create-lmdpk\" (UID: \"7f34f608-b2f8-452e-8f0d-ef600929c36e\") " pod="openstack/nova-api-db-create-lmdpk"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.104625 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f34f608-b2f8-452e-8f0d-ef600929c36e-operator-scripts\") pod \"nova-api-db-create-lmdpk\" (UID: \"7f34f608-b2f8-452e-8f0d-ef600929c36e\") " pod="openstack/nova-api-db-create-lmdpk"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.105434 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f34f608-b2f8-452e-8f0d-ef600929c36e-operator-scripts\") pod \"nova-api-db-create-lmdpk\" (UID: \"7f34f608-b2f8-452e-8f0d-ef600929c36e\") " pod="openstack/nova-api-db-create-lmdpk"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.130444 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh4cb\" (UniqueName: \"kubernetes.io/projected/7f34f608-b2f8-452e-8f0d-ef600929c36e-kube-api-access-wh4cb\") pod \"nova-api-db-create-lmdpk\" (UID: \"7f34f608-b2f8-452e-8f0d-ef600929c36e\") " pod="openstack/nova-api-db-create-lmdpk"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.175462 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-stxgj"]
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.184933 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-stxgj"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.193693 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-e284-account-create-update-cz9rj"]
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.195470 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e284-account-create-update-cz9rj"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.197354 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.205848 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdg6w\" (UniqueName: \"kubernetes.io/projected/d6a58042-fefd-43b8-b186-905dcfc7b1af-kube-api-access-gdg6w\") pod \"nova-cell0-db-create-9xnkt\" (UID: \"d6a58042-fefd-43b8-b186-905dcfc7b1af\") " pod="openstack/nova-cell0-db-create-9xnkt"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.205881 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6a58042-fefd-43b8-b186-905dcfc7b1af-operator-scripts\") pod \"nova-cell0-db-create-9xnkt\" (UID: \"d6a58042-fefd-43b8-b186-905dcfc7b1af\") " pod="openstack/nova-cell0-db-create-9xnkt"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.206866 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6a58042-fefd-43b8-b186-905dcfc7b1af-operator-scripts\") pod \"nova-cell0-db-create-9xnkt\" (UID: \"d6a58042-fefd-43b8-b186-905dcfc7b1af\") " pod="openstack/nova-cell0-db-create-9xnkt"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.219023 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-stxgj"]
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.223091 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-lmdpk"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.226500 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e284-account-create-update-cz9rj"]
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.241705 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdg6w\" (UniqueName: \"kubernetes.io/projected/d6a58042-fefd-43b8-b186-905dcfc7b1af-kube-api-access-gdg6w\") pod \"nova-cell0-db-create-9xnkt\" (UID: \"d6a58042-fefd-43b8-b186-905dcfc7b1af\") " pod="openstack/nova-cell0-db-create-9xnkt"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.309744 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/110f96e6-c230-44f3-9247-90283da8976c-operator-scripts\") pod \"nova-cell1-db-create-stxgj\" (UID: \"110f96e6-c230-44f3-9247-90283da8976c\") " pod="openstack/nova-cell1-db-create-stxgj"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.310234 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5t4k\" (UniqueName: \"kubernetes.io/projected/110f96e6-c230-44f3-9247-90283da8976c-kube-api-access-b5t4k\") pod \"nova-cell1-db-create-stxgj\" (UID: \"110f96e6-c230-44f3-9247-90283da8976c\") " pod="openstack/nova-cell1-db-create-stxgj"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.310342 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e-operator-scripts\") pod \"nova-api-e284-account-create-update-cz9rj\" (UID: \"ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e\") " pod="openstack/nova-api-e284-account-create-update-cz9rj"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.310359 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zz5q\" (UniqueName: \"kubernetes.io/projected/ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e-kube-api-access-2zz5q\") pod \"nova-api-e284-account-create-update-cz9rj\" (UID: \"ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e\") " pod="openstack/nova-api-e284-account-create-update-cz9rj"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.329058 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-9xnkt"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.412033 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e-operator-scripts\") pod \"nova-api-e284-account-create-update-cz9rj\" (UID: \"ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e\") " pod="openstack/nova-api-e284-account-create-update-cz9rj"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.412078 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zz5q\" (UniqueName: \"kubernetes.io/projected/ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e-kube-api-access-2zz5q\") pod \"nova-api-e284-account-create-update-cz9rj\" (UID: \"ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e\") " pod="openstack/nova-api-e284-account-create-update-cz9rj"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.412138 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/110f96e6-c230-44f3-9247-90283da8976c-operator-scripts\") pod \"nova-cell1-db-create-stxgj\" (UID: \"110f96e6-c230-44f3-9247-90283da8976c\") " pod="openstack/nova-cell1-db-create-stxgj"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.412165 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5t4k\" (UniqueName: \"kubernetes.io/projected/110f96e6-c230-44f3-9247-90283da8976c-kube-api-access-b5t4k\") pod \"nova-cell1-db-create-stxgj\" (UID: \"110f96e6-c230-44f3-9247-90283da8976c\") " pod="openstack/nova-cell1-db-create-stxgj"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.415153 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e-operator-scripts\") pod \"nova-api-e284-account-create-update-cz9rj\" (UID: \"ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e\") " pod="openstack/nova-api-e284-account-create-update-cz9rj"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.415939 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/110f96e6-c230-44f3-9247-90283da8976c-operator-scripts\") pod \"nova-cell1-db-create-stxgj\" (UID: \"110f96e6-c230-44f3-9247-90283da8976c\") " pod="openstack/nova-cell1-db-create-stxgj"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.439853 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-fe67-account-create-update-bk5t9"]
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.441172 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-fe67-account-create-update-bk5t9"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.448670 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.449953 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-fe67-account-create-update-bk5t9"]
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.457617 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5t4k\" (UniqueName: \"kubernetes.io/projected/110f96e6-c230-44f3-9247-90283da8976c-kube-api-access-b5t4k\") pod \"nova-cell1-db-create-stxgj\" (UID: \"110f96e6-c230-44f3-9247-90283da8976c\") " pod="openstack/nova-cell1-db-create-stxgj"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.466589 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zz5q\" (UniqueName: \"kubernetes.io/projected/ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e-kube-api-access-2zz5q\") pod \"nova-api-e284-account-create-update-cz9rj\" (UID: \"ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e\") " pod="openstack/nova-api-e284-account-create-update-cz9rj"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.504688 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-stxgj"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.515029 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e284-account-create-update-cz9rj"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.518836 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z7ch\" (UniqueName: \"kubernetes.io/projected/804a6c8c-4d3d-4949-adad-bf28d059ac39-kube-api-access-9z7ch\") pod \"nova-cell0-fe67-account-create-update-bk5t9\" (UID: \"804a6c8c-4d3d-4949-adad-bf28d059ac39\") " pod="openstack/nova-cell0-fe67-account-create-update-bk5t9"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.518979 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/804a6c8c-4d3d-4949-adad-bf28d059ac39-operator-scripts\") pod \"nova-cell0-fe67-account-create-update-bk5t9\" (UID: \"804a6c8c-4d3d-4949-adad-bf28d059ac39\") " pod="openstack/nova-cell0-fe67-account-create-update-bk5t9"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.527086 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-4e36-account-create-update-mthn6"]
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.529084 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4e36-account-create-update-mthn6"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.531196 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.538431 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-4e36-account-create-update-mthn6"]
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.621165 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63f2899c-3ee5-4d2c-ae4f-487783fede07-operator-scripts\") pod \"nova-cell1-4e36-account-create-update-mthn6\" (UID: \"63f2899c-3ee5-4d2c-ae4f-487783fede07\") " pod="openstack/nova-cell1-4e36-account-create-update-mthn6"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.621338 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/804a6c8c-4d3d-4949-adad-bf28d059ac39-operator-scripts\") pod \"nova-cell0-fe67-account-create-update-bk5t9\" (UID: \"804a6c8c-4d3d-4949-adad-bf28d059ac39\") " pod="openstack/nova-cell0-fe67-account-create-update-bk5t9"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.621379 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq7rj\" (UniqueName: \"kubernetes.io/projected/63f2899c-3ee5-4d2c-ae4f-487783fede07-kube-api-access-pq7rj\") pod \"nova-cell1-4e36-account-create-update-mthn6\" (UID: \"63f2899c-3ee5-4d2c-ae4f-487783fede07\") " pod="openstack/nova-cell1-4e36-account-create-update-mthn6"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.621406 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z7ch\" (UniqueName: \"kubernetes.io/projected/804a6c8c-4d3d-4949-adad-bf28d059ac39-kube-api-access-9z7ch\") pod \"nova-cell0-fe67-account-create-update-bk5t9\" (UID: \"804a6c8c-4d3d-4949-adad-bf28d059ac39\") " pod="openstack/nova-cell0-fe67-account-create-update-bk5t9"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.622361 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/804a6c8c-4d3d-4949-adad-bf28d059ac39-operator-scripts\") pod \"nova-cell0-fe67-account-create-update-bk5t9\" (UID: \"804a6c8c-4d3d-4949-adad-bf28d059ac39\") " pod="openstack/nova-cell0-fe67-account-create-update-bk5t9"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.644246 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z7ch\" (UniqueName: \"kubernetes.io/projected/804a6c8c-4d3d-4949-adad-bf28d059ac39-kube-api-access-9z7ch\") pod \"nova-cell0-fe67-account-create-update-bk5t9\" (UID: \"804a6c8c-4d3d-4949-adad-bf28d059ac39\") " pod="openstack/nova-cell0-fe67-account-create-update-bk5t9"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.723914 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq7rj\" (UniqueName: \"kubernetes.io/projected/63f2899c-3ee5-4d2c-ae4f-487783fede07-kube-api-access-pq7rj\") pod \"nova-cell1-4e36-account-create-update-mthn6\" (UID: \"63f2899c-3ee5-4d2c-ae4f-487783fede07\") " pod="openstack/nova-cell1-4e36-account-create-update-mthn6"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.724067 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63f2899c-3ee5-4d2c-ae4f-487783fede07-operator-scripts\") pod \"nova-cell1-4e36-account-create-update-mthn6\" (UID: \"63f2899c-3ee5-4d2c-ae4f-487783fede07\") " pod="openstack/nova-cell1-4e36-account-create-update-mthn6"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.725179 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63f2899c-3ee5-4d2c-ae4f-487783fede07-operator-scripts\") pod \"nova-cell1-4e36-account-create-update-mthn6\" (UID: \"63f2899c-3ee5-4d2c-ae4f-487783fede07\") " pod="openstack/nova-cell1-4e36-account-create-update-mthn6"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.744823 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq7rj\" (UniqueName: \"kubernetes.io/projected/63f2899c-3ee5-4d2c-ae4f-487783fede07-kube-api-access-pq7rj\") pod \"nova-cell1-4e36-account-create-update-mthn6\" (UID: \"63f2899c-3ee5-4d2c-ae4f-487783fede07\") " pod="openstack/nova-cell1-4e36-account-create-update-mthn6"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.800818 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-lmdpk"]
Jan 29 15:50:39 crc kubenswrapper[5008]: W0129 15:50:39.802897 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f34f608_b2f8_452e_8f0d_ef600929c36e.slice/crio-9f711c01c3f3f8e6a20e1c5e91488a28de77b0e88d5a0a5f43a930d927bc74ee WatchSource:0}: Error finding container 9f711c01c3f3f8e6a20e1c5e91488a28de77b0e88d5a0a5f43a930d927bc74ee: Status 404 returned error can't find the container with id 9f711c01c3f3f8e6a20e1c5e91488a28de77b0e88d5a0a5f43a930d927bc74ee
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.803632 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-fe67-account-create-update-bk5t9"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.820756 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c81636ad-f799-43f6-8304-b2121e7bb427","Type":"ContainerStarted","Data":"6cb7bc803573f6d8292dd7a40b28153e8f4ff1271e0fa808ba53834296b1df6d"}
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.854542 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4e36-account-create-update-mthn6"
Jan 29 15:50:39 crc kubenswrapper[5008]: I0129 15:50:39.988477 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-9xnkt"]
Jan 29 15:50:40 crc kubenswrapper[5008]: I0129 15:50:40.138602 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e284-account-create-update-cz9rj"]
Jan 29 15:50:40 crc kubenswrapper[5008]: I0129 15:50:40.247390 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-stxgj"]
Jan 29 15:50:40 crc kubenswrapper[5008]: I0129 15:50:40.502824 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-fe67-account-create-update-bk5t9"]
Jan 29 15:50:40 crc kubenswrapper[5008]: I0129 15:50:40.510533 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-4e36-account-create-update-mthn6"]
Jan 29 15:50:40 crc kubenswrapper[5008]: I0129 15:50:40.613937 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-55d9fbf66-r5kj8"
Jan 29 15:50:40 crc kubenswrapper[5008]: I0129 15:50:40.621137 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-55d9fbf66-r5kj8"
Jan 29 15:50:40 crc kubenswrapper[5008]: I0129 15:50:40.720015 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6445bd445b-mhznq"]
Jan 29 15:50:40 crc kubenswrapper[5008]: I0129 15:50:40.720259 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6445bd445b-mhznq" podUID="6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" containerName="placement-log" containerID="cri-o://922dd14c1fc131087530a679c50232179f2527a755c5b35806f14d9f5f69d2cd" gracePeriod=30
Jan 29 15:50:40 crc kubenswrapper[5008]: I0129 15:50:40.720659 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6445bd445b-mhznq" podUID="6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" containerName="placement-api" containerID="cri-o://eda94a8d83e7b9b941d8d728214164666a763004cdb54a95b67730b9ed4bb21a" gracePeriod=30
Jan 29 15:50:40 crc kubenswrapper[5008]: I0129 15:50:40.837645 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-stxgj" event={"ID":"110f96e6-c230-44f3-9247-90283da8976c","Type":"ContainerStarted","Data":"54f43a8eeb4abb125a006167955d4625d7a73d504efd41b8523df427c164efa6"}
Jan 29 15:50:40 crc kubenswrapper[5008]: I0129 15:50:40.841704 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4e36-account-create-update-mthn6" event={"ID":"63f2899c-3ee5-4d2c-ae4f-487783fede07","Type":"ContainerStarted","Data":"9b12f47cdbb2b896c48220d1aac0e8e6b7220c6ea2c4ff4cf2b76a913ef44a53"}
Jan 29 15:50:40 crc kubenswrapper[5008]: I0129 15:50:40.842815 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e284-account-create-update-cz9rj" event={"ID":"ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e","Type":"ContainerStarted","Data":"50a2b0760e4fa9cc3fb045d185bf9670bd499e7f4ef0f98235ea9f3653af510c"}
Jan 29 15:50:40 crc kubenswrapper[5008]: I0129 15:50:40.844221 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-lmdpk" event={"ID":"7f34f608-b2f8-452e-8f0d-ef600929c36e","Type":"ContainerStarted","Data":"be81fff79545094faefca144ba3c4c81eebfa7419befdbb4509e7d36ea1420d2"}
Jan 29 15:50:40 crc kubenswrapper[5008]: I0129 15:50:40.844245 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-lmdpk" event={"ID":"7f34f608-b2f8-452e-8f0d-ef600929c36e","Type":"ContainerStarted","Data":"9f711c01c3f3f8e6a20e1c5e91488a28de77b0e88d5a0a5f43a930d927bc74ee"}
Jan 29 15:50:40 crc kubenswrapper[5008]: I0129 15:50:40.845022 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9xnkt" event={"ID":"d6a58042-fefd-43b8-b186-905dcfc7b1af","Type":"ContainerStarted","Data":"aaf43661078ac1a9bfb08bc59c79813429bb3816596b5d120e45991a198b87c8"}
Jan 29 15:50:40 crc kubenswrapper[5008]: I0129 15:50:40.847424 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fe67-account-create-update-bk5t9" event={"ID":"804a6c8c-4d3d-4949-adad-bf28d059ac39","Type":"ContainerStarted","Data":"c441bacddbbf24594f7845afb68dc94be9cda37d3cadf2779f979bf27b1d5a46"}
Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.255347 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.438735 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.438974 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a4572386-a7c3-434a-8bcb-d1643d6893c9" containerName="glance-log" containerID="cri-o://e0fa9f1865b5505ccd4891898d3b56eec542add6175364fd360ee56950f55bac" gracePeriod=30
Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.439088 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a4572386-a7c3-434a-8bcb-d1643d6893c9" containerName="glance-httpd" containerID="cri-o://c487f572a202948b8d78e72676270d3b2c63fcc77e90c053860ecb9f63566609" gracePeriod=30
Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.857171 5008 generic.go:334] "Generic (PLEG): container finished" podID="a4572386-a7c3-434a-8bcb-d1643d6893c9" containerID="e0fa9f1865b5505ccd4891898d3b56eec542add6175364fd360ee56950f55bac" exitCode=143
Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.857270 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a4572386-a7c3-434a-8bcb-d1643d6893c9","Type":"ContainerDied","Data":"e0fa9f1865b5505ccd4891898d3b56eec542add6175364fd360ee56950f55bac"}
Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.859341 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4e36-account-create-update-mthn6" event={"ID":"63f2899c-3ee5-4d2c-ae4f-487783fede07","Type":"ContainerStarted","Data":"4e5d5fbe6f7326436f09c1eeb706af22dd1889f9d31180f26e9f3a4622f566e8"}
Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.861168 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e284-account-create-update-cz9rj" event={"ID":"ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e","Type":"ContainerStarted","Data":"415c274cf2a73d8ccd9cabf2d49c7d2a9afd104170d6b26b6bc768e4e9246896"}
Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.864087 5008 generic.go:334] "Generic (PLEG): container finished" podID="6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" containerID="922dd14c1fc131087530a679c50232179f2527a755c5b35806f14d9f5f69d2cd" exitCode=143
Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.864130 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6445bd445b-mhznq" event={"ID":"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b","Type":"ContainerDied","Data":"922dd14c1fc131087530a679c50232179f2527a755c5b35806f14d9f5f69d2cd"}
Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.865573 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9xnkt" event={"ID":"d6a58042-fefd-43b8-b186-905dcfc7b1af","Type":"ContainerStarted","Data":"9c072e49faa0fcbf14fb26ba5be4f4038a4404627a5b1d14d06a8f9d4347e6b9"}
Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.866896 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fe67-account-create-update-bk5t9" event={"ID":"804a6c8c-4d3d-4949-adad-bf28d059ac39","Type":"ContainerStarted","Data":"169df0c3000d56c3aa28fc235cca6494757bead3f467fc3b72cab38160ba66e9"}
Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.869627 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-stxgj" event={"ID":"110f96e6-c230-44f3-9247-90283da8976c","Type":"ContainerStarted","Data":"84562c9f10ffe2b7193c90030faf995da403e3f35ef68c087bff6d088be04ae5"}
Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.888385 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-4e36-account-create-update-mthn6" podStartSLOduration=2.888367486 podStartE2EDuration="2.888367486s" podCreationTimestamp="2026-01-29 15:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:50:41.876713774 +0000 UTC m=+1385.549568021" watchObservedRunningTime="2026-01-29 15:50:41.888367486 +0000 UTC m=+1385.561221733"
Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.895870 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-stxgj" podStartSLOduration=2.895852468 podStartE2EDuration="2.895852468s" podCreationTimestamp="2026-01-29 15:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:50:41.891534984 +0000 UTC m=+1385.564389231" watchObservedRunningTime="2026-01-29 15:50:41.895852468 +0000 UTC m=+1385.568706715"
Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.913723 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-fe67-account-create-update-bk5t9" podStartSLOduration=2.913702692 podStartE2EDuration="2.913702692s" podCreationTimestamp="2026-01-29 15:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:50:41.906566268 +0000 UTC m=+1385.579420515" watchObservedRunningTime="2026-01-29 15:50:41.913702692 +0000 UTC m=+1385.586556929"
Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.936673 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-e284-account-create-update-cz9rj" podStartSLOduration=2.9366512780000003 podStartE2EDuration="2.936651278s" podCreationTimestamp="2026-01-29 15:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:50:41.931359829 +0000 UTC m=+1385.604214086" watchObservedRunningTime="2026-01-29 15:50:41.936651278 +0000 UTC m=+1385.609505525"
Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.956828 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-lmdpk" podStartSLOduration=3.956802506 podStartE2EDuration="3.956802506s" podCreationTimestamp="2026-01-29 15:50:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:50:41.944610671 +0000 UTC m=+1385.617464908" watchObservedRunningTime="2026-01-29 15:50:41.956802506 +0000 UTC m=+1385.629656763"
Jan 29 15:50:41 crc kubenswrapper[5008]: I0129 15:50:41.964062 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-9xnkt" podStartSLOduration=3.964046263 podStartE2EDuration="3.964046263s" podCreationTimestamp="2026-01-29 15:50:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:50:41.957314899 +0000 UTC m=+1385.630169136" watchObservedRunningTime="2026-01-29 15:50:41.964046263 +0000 UTC m=+1385.636900490"
Jan 29 15:50:42 crc kubenswrapper[5008]: I0129 15:50:42.885989 5008 generic.go:334] "Generic (PLEG): container finished" podID="d6a58042-fefd-43b8-b186-905dcfc7b1af" containerID="9c072e49faa0fcbf14fb26ba5be4f4038a4404627a5b1d14d06a8f9d4347e6b9" exitCode=0
Jan 29 15:50:42 crc kubenswrapper[5008]: I0129 15:50:42.886482 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9xnkt" event={"ID":"d6a58042-fefd-43b8-b186-905dcfc7b1af","Type":"ContainerDied","Data":"9c072e49faa0fcbf14fb26ba5be4f4038a4404627a5b1d14d06a8f9d4347e6b9"}
Jan 29 15:50:42 crc kubenswrapper[5008]: I0129 15:50:42.889376 5008 generic.go:334] "Generic (PLEG): container finished" podID="110f96e6-c230-44f3-9247-90283da8976c" containerID="84562c9f10ffe2b7193c90030faf995da403e3f35ef68c087bff6d088be04ae5" exitCode=0
Jan 29 15:50:42 crc kubenswrapper[5008]: I0129 15:50:42.889489 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-stxgj" event={"ID":"110f96e6-c230-44f3-9247-90283da8976c","Type":"ContainerDied","Data":"84562c9f10ffe2b7193c90030faf995da403e3f35ef68c087bff6d088be04ae5"}
Jan 29 15:50:42 crc kubenswrapper[5008]: I0129 15:50:42.908413 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c81636ad-f799-43f6-8304-b2121e7bb427","Type":"ContainerStarted","Data":"1160ebdc889e903ce1ab9549db1c8d7aedbec5dbd448d12df99a7b71c4f59a71"}
Jan 29 15:50:42 crc kubenswrapper[5008]: I0129 15:50:42.908705 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" containerName="ceilometer-central-agent" containerID="cri-o://5bdb92bd8804311389315e1c2733efae43b86032b34ec9f92e93486c776777f4" gracePeriod=30
Jan 29 15:50:42 crc kubenswrapper[5008]: I0129 15:50:42.908858 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 29 15:50:42 crc kubenswrapper[5008]: I0129 15:50:42.908926 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" containerName="proxy-httpd" containerID="cri-o://1160ebdc889e903ce1ab9549db1c8d7aedbec5dbd448d12df99a7b71c4f59a71" gracePeriod=30
Jan 29 15:50:42 crc kubenswrapper[5008]: I0129 15:50:42.908998 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" containerName="sg-core" containerID="cri-o://6cb7bc803573f6d8292dd7a40b28153e8f4ff1271e0fa808ba53834296b1df6d" gracePeriod=30
Jan 29 15:50:42 crc kubenswrapper[5008]: I0129 15:50:42.909056 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" containerName="ceilometer-notification-agent" containerID="cri-o://57b9f0118bc63b684df15ec4953cbf43eb08b4c8cd41ed4c65c18bdbe33f4dce" gracePeriod=30
Jan 29 15:50:42 crc kubenswrapper[5008]: I0129 15:50:42.921451 5008 generic.go:334] "Generic (PLEG): container finished" podID="7f34f608-b2f8-452e-8f0d-ef600929c36e" containerID="be81fff79545094faefca144ba3c4c81eebfa7419befdbb4509e7d36ea1420d2" exitCode=0
Jan 29 15:50:42 crc kubenswrapper[5008]: I0129 15:50:42.922547 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-lmdpk" event={"ID":"7f34f608-b2f8-452e-8f0d-ef600929c36e","Type":"ContainerDied","Data":"be81fff79545094faefca144ba3c4c81eebfa7419befdbb4509e7d36ea1420d2"}
Jan 29 15:50:42 crc kubenswrapper[5008]: I0129 15:50:42.964888 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.336980572 podStartE2EDuration="7.964873839s" podCreationTimestamp="2026-01-29 15:50:35 +0000 UTC" firstStartedPulling="2026-01-29 15:50:36.696694891 +0000 UTC m=+1380.369549128" lastFinishedPulling="2026-01-29 15:50:42.324588158 +0000 UTC m=+1385.997442395" observedRunningTime="2026-01-29 15:50:42.959468988 +0000 UTC m=+1386.632323225" watchObservedRunningTime="2026-01-29 15:50:42.964873839 +0000 UTC m=+1386.637728076"
Jan 29 15:50:43 crc kubenswrapper[5008]: I0129 15:50:43.935775 5008 generic.go:334] "Generic (PLEG): container finished" podID="ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e" containerID="415c274cf2a73d8ccd9cabf2d49c7d2a9afd104170d6b26b6bc768e4e9246896" exitCode=0
Jan 29 15:50:43 crc kubenswrapper[5008]: I0129 15:50:43.935818 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e284-account-create-update-cz9rj" event={"ID":"ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e","Type":"ContainerDied","Data":"415c274cf2a73d8ccd9cabf2d49c7d2a9afd104170d6b26b6bc768e4e9246896"}
Jan 29 15:50:43 crc kubenswrapper[5008]: I0129 15:50:43.937693 5008 generic.go:334] "Generic (PLEG): container finished" podID="804a6c8c-4d3d-4949-adad-bf28d059ac39" containerID="169df0c3000d56c3aa28fc235cca6494757bead3f467fc3b72cab38160ba66e9" exitCode=0
Jan 29 15:50:43 crc kubenswrapper[5008]: I0129 15:50:43.937725 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fe67-account-create-update-bk5t9" event={"ID":"804a6c8c-4d3d-4949-adad-bf28d059ac39","Type":"ContainerDied","Data":"169df0c3000d56c3aa28fc235cca6494757bead3f467fc3b72cab38160ba66e9"}
Jan 29 15:50:43 crc kubenswrapper[5008]: I0129 15:50:43.939414 5008 generic.go:334] "Generic (PLEG): container finished" podID="63f2899c-3ee5-4d2c-ae4f-487783fede07" containerID="4e5d5fbe6f7326436f09c1eeb706af22dd1889f9d31180f26e9f3a4622f566e8" exitCode=0
Jan 29 15:50:43 crc kubenswrapper[5008]: I0129 15:50:43.939461 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4e36-account-create-update-mthn6" event={"ID":"63f2899c-3ee5-4d2c-ae4f-487783fede07","Type":"ContainerDied","Data":"4e5d5fbe6f7326436f09c1eeb706af22dd1889f9d31180f26e9f3a4622f566e8"}
Jan 29 15:50:43 crc kubenswrapper[5008]: I0129 15:50:43.942328 5008 generic.go:334] "Generic (PLEG): container finished" podID="c81636ad-f799-43f6-8304-b2121e7bb427" containerID="1160ebdc889e903ce1ab9549db1c8d7aedbec5dbd448d12df99a7b71c4f59a71" exitCode=0
Jan 29 15:50:43 crc kubenswrapper[5008]: I0129 15:50:43.942346 5008 generic.go:334] "Generic (PLEG): container finished" podID="c81636ad-f799-43f6-8304-b2121e7bb427" containerID="6cb7bc803573f6d8292dd7a40b28153e8f4ff1271e0fa808ba53834296b1df6d" exitCode=2
Jan 29 15:50:43 crc kubenswrapper[5008]: I0129 15:50:43.942355 5008 generic.go:334] "Generic (PLEG): container finished" podID="c81636ad-f799-43f6-8304-b2121e7bb427" containerID="57b9f0118bc63b684df15ec4953cbf43eb08b4c8cd41ed4c65c18bdbe33f4dce" exitCode=0
Jan 29 15:50:43 crc kubenswrapper[5008]: I0129 15:50:43.942417 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c81636ad-f799-43f6-8304-b2121e7bb427","Type":"ContainerDied","Data":"1160ebdc889e903ce1ab9549db1c8d7aedbec5dbd448d12df99a7b71c4f59a71"}
Jan 29 15:50:43 crc kubenswrapper[5008]: I0129 15:50:43.942463 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c81636ad-f799-43f6-8304-b2121e7bb427","Type":"ContainerDied","Data":"6cb7bc803573f6d8292dd7a40b28153e8f4ff1271e0fa808ba53834296b1df6d"}
Jan 29 15:50:43 crc kubenswrapper[5008]: I0129 15:50:43.942476 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c81636ad-f799-43f6-8304-b2121e7bb427","Type":"ContainerDied","Data":"57b9f0118bc63b684df15ec4953cbf43eb08b4c8cd41ed4c65c18bdbe33f4dce"}
Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.455091 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-stxgj"
Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.532154 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/110f96e6-c230-44f3-9247-90283da8976c-operator-scripts\") pod \"110f96e6-c230-44f3-9247-90283da8976c\" (UID: \"110f96e6-c230-44f3-9247-90283da8976c\") "
Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.532271 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5t4k\" (UniqueName: \"kubernetes.io/projected/110f96e6-c230-44f3-9247-90283da8976c-kube-api-access-b5t4k\") pod \"110f96e6-c230-44f3-9247-90283da8976c\" (UID: \"110f96e6-c230-44f3-9247-90283da8976c\") "
Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.533146 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/110f96e6-c230-44f3-9247-90283da8976c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "110f96e6-c230-44f3-9247-90283da8976c" (UID: "110f96e6-c230-44f3-9247-90283da8976c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.537898 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/110f96e6-c230-44f3-9247-90283da8976c-kube-api-access-b5t4k" (OuterVolumeSpecName: "kube-api-access-b5t4k") pod "110f96e6-c230-44f3-9247-90283da8976c" (UID: "110f96e6-c230-44f3-9247-90283da8976c"). InnerVolumeSpecName "kube-api-access-b5t4k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.592731 5008 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-api-db-create-lmdpk" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.596123 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.600622 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-9xnkt" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.633638 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f34f608-b2f8-452e-8f0d-ef600929c36e-operator-scripts\") pod \"7f34f608-b2f8-452e-8f0d-ef600929c36e\" (UID: \"7f34f608-b2f8-452e-8f0d-ef600929c36e\") " Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.633754 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-internal-tls-certs\") pod \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.633813 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-combined-ca-bundle\") pod \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.633847 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-scripts\") pod \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.633918 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh4cb\" (UniqueName: \"kubernetes.io/projected/7f34f608-b2f8-452e-8f0d-ef600929c36e-kube-api-access-wh4cb\") pod \"7f34f608-b2f8-452e-8f0d-ef600929c36e\" (UID: \"7f34f608-b2f8-452e-8f0d-ef600929c36e\") " Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.633963 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-public-tls-certs\") pod \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.634006 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-config-data\") pod \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.634024 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdg6w\" (UniqueName: \"kubernetes.io/projected/d6a58042-fefd-43b8-b186-905dcfc7b1af-kube-api-access-gdg6w\") pod \"d6a58042-fefd-43b8-b186-905dcfc7b1af\" (UID: \"d6a58042-fefd-43b8-b186-905dcfc7b1af\") " Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.634077 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6a58042-fefd-43b8-b186-905dcfc7b1af-operator-scripts\") 
pod \"d6a58042-fefd-43b8-b186-905dcfc7b1af\" (UID: \"d6a58042-fefd-43b8-b186-905dcfc7b1af\") " Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.634105 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhbxw\" (UniqueName: \"kubernetes.io/projected/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-kube-api-access-qhbxw\") pod \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.634127 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-logs\") pod \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\" (UID: \"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b\") " Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.634189 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f34f608-b2f8-452e-8f0d-ef600929c36e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7f34f608-b2f8-452e-8f0d-ef600929c36e" (UID: "7f34f608-b2f8-452e-8f0d-ef600929c36e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.634989 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f34f608-b2f8-452e-8f0d-ef600929c36e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.635078 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/110f96e6-c230-44f3-9247-90283da8976c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.635146 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5t4k\" (UniqueName: \"kubernetes.io/projected/110f96e6-c230-44f3-9247-90283da8976c-kube-api-access-b5t4k\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.638138 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a58042-fefd-43b8-b186-905dcfc7b1af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d6a58042-fefd-43b8-b186-905dcfc7b1af" (UID: "d6a58042-fefd-43b8-b186-905dcfc7b1af"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.647021 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-scripts" (OuterVolumeSpecName: "scripts") pod "6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" (UID: "6bb31a7e-2eaf-445f-84d5-50aa5d1d007b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.647085 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-logs" (OuterVolumeSpecName: "logs") pod "6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" (UID: "6bb31a7e-2eaf-445f-84d5-50aa5d1d007b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.677077 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-kube-api-access-qhbxw" (OuterVolumeSpecName: "kube-api-access-qhbxw") pod "6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" (UID: "6bb31a7e-2eaf-445f-84d5-50aa5d1d007b"). InnerVolumeSpecName "kube-api-access-qhbxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.723389 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f34f608-b2f8-452e-8f0d-ef600929c36e-kube-api-access-wh4cb" (OuterVolumeSpecName: "kube-api-access-wh4cb") pod "7f34f608-b2f8-452e-8f0d-ef600929c36e" (UID: "7f34f608-b2f8-452e-8f0d-ef600929c36e"). InnerVolumeSpecName "kube-api-access-wh4cb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.723824 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6a58042-fefd-43b8-b186-905dcfc7b1af-kube-api-access-gdg6w" (OuterVolumeSpecName: "kube-api-access-gdg6w") pod "d6a58042-fefd-43b8-b186-905dcfc7b1af" (UID: "d6a58042-fefd-43b8-b186-905dcfc7b1af"). InnerVolumeSpecName "kube-api-access-gdg6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.731969 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.732231 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="deb07ec3-dbb1-49c4-a9cc-155472fc28bd" containerName="glance-log" containerID="cri-o://dd3b252c8faadfc964f08468ca0dd6531af9e9a227235dd0778b9ecd9c6cebce" gracePeriod=30 Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.732296 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="deb07ec3-dbb1-49c4-a9cc-155472fc28bd" containerName="glance-httpd" containerID="cri-o://545a1369d45b715a3fe719964ed37da74cd517e9b86ae7060e6fa55a82e6ac61" gracePeriod=30 Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.757617 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh4cb\" (UniqueName: \"kubernetes.io/projected/7f34f608-b2f8-452e-8f0d-ef600929c36e-kube-api-access-wh4cb\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.757656 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdg6w\" (UniqueName: \"kubernetes.io/projected/d6a58042-fefd-43b8-b186-905dcfc7b1af-kube-api-access-gdg6w\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.757668 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6a58042-fefd-43b8-b186-905dcfc7b1af-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.757679 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhbxw\" (UniqueName: \"kubernetes.io/projected/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-kube-api-access-qhbxw\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.757691 5008 reconciler_common.go:293] "Volume 
detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.757702 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.770869 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" (UID: "6bb31a7e-2eaf-445f-84d5-50aa5d1d007b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.770971 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-config-data" (OuterVolumeSpecName: "config-data") pod "6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" (UID: "6bb31a7e-2eaf-445f-84d5-50aa5d1d007b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.821068 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" (UID: "6bb31a7e-2eaf-445f-84d5-50aa5d1d007b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.840275 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" (UID: "6bb31a7e-2eaf-445f-84d5-50aa5d1d007b"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.859031 5008 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.859076 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.859088 5008 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.859101 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.954422 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9xnkt" event={"ID":"d6a58042-fefd-43b8-b186-905dcfc7b1af","Type":"ContainerDied","Data":"aaf43661078ac1a9bfb08bc59c79813429bb3816596b5d120e45991a198b87c8"} Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.954464 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aaf43661078ac1a9bfb08bc59c79813429bb3816596b5d120e45991a198b87c8" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.954431 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-9xnkt" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.956991 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-stxgj" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.957034 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-stxgj" event={"ID":"110f96e6-c230-44f3-9247-90283da8976c","Type":"ContainerDied","Data":"54f43a8eeb4abb125a006167955d4625d7a73d504efd41b8523df427c164efa6"} Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.957189 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54f43a8eeb4abb125a006167955d4625d7a73d504efd41b8523df427c164efa6" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.960928 5008 generic.go:334] "Generic (PLEG): container finished" podID="a4572386-a7c3-434a-8bcb-d1643d6893c9" containerID="c487f572a202948b8d78e72676270d3b2c63fcc77e90c053860ecb9f63566609" exitCode=0 Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.961066 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a4572386-a7c3-434a-8bcb-d1643d6893c9","Type":"ContainerDied","Data":"c487f572a202948b8d78e72676270d3b2c63fcc77e90c053860ecb9f63566609"} Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.970400 5008 generic.go:334] "Generic (PLEG): container finished" podID="deb07ec3-dbb1-49c4-a9cc-155472fc28bd" containerID="dd3b252c8faadfc964f08468ca0dd6531af9e9a227235dd0778b9ecd9c6cebce" exitCode=143 Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.970513 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"deb07ec3-dbb1-49c4-a9cc-155472fc28bd","Type":"ContainerDied","Data":"dd3b252c8faadfc964f08468ca0dd6531af9e9a227235dd0778b9ecd9c6cebce"} Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.976129 5008 generic.go:334] "Generic (PLEG): container finished" podID="6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" containerID="eda94a8d83e7b9b941d8d728214164666a763004cdb54a95b67730b9ed4bb21a" exitCode=0 Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.976185 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6445bd445b-mhznq" event={"ID":"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b","Type":"ContainerDied","Data":"eda94a8d83e7b9b941d8d728214164666a763004cdb54a95b67730b9ed4bb21a"} Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.976412 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6445bd445b-mhznq" event={"ID":"6bb31a7e-2eaf-445f-84d5-50aa5d1d007b","Type":"ContainerDied","Data":"359a72657c9bfba53abd214342c7a1e93d76aafd5e6beccbea5acec3bf995e32"} Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.976522 5008 scope.go:117] "RemoveContainer" containerID="eda94a8d83e7b9b941d8d728214164666a763004cdb54a95b67730b9ed4bb21a" Jan 29 15:50:44 crc kubenswrapper[5008]: I0129 15:50:44.976200 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6445bd445b-mhznq" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:44.988701 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-lmdpk" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:44.989246 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-lmdpk" event={"ID":"7f34f608-b2f8-452e-8f0d-ef600929c36e","Type":"ContainerDied","Data":"9f711c01c3f3f8e6a20e1c5e91488a28de77b0e88d5a0a5f43a930d927bc74ee"} Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:44.989280 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f711c01c3f3f8e6a20e1c5e91488a28de77b0e88d5a0a5f43a930d927bc74ee" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.045812 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6445bd445b-mhznq"] Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.060066 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6445bd445b-mhznq"] Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.064425 5008 scope.go:117] "RemoveContainer" containerID="922dd14c1fc131087530a679c50232179f2527a755c5b35806f14d9f5f69d2cd" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.125763 5008 scope.go:117] "RemoveContainer" containerID="eda94a8d83e7b9b941d8d728214164666a763004cdb54a95b67730b9ed4bb21a" Jan 29 15:50:45 crc kubenswrapper[5008]: E0129 15:50:45.126542 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eda94a8d83e7b9b941d8d728214164666a763004cdb54a95b67730b9ed4bb21a\": container with ID starting with eda94a8d83e7b9b941d8d728214164666a763004cdb54a95b67730b9ed4bb21a not found: ID does not exist" containerID="eda94a8d83e7b9b941d8d728214164666a763004cdb54a95b67730b9ed4bb21a" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.126622 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eda94a8d83e7b9b941d8d728214164666a763004cdb54a95b67730b9ed4bb21a"} err="failed to get container status \"eda94a8d83e7b9b941d8d728214164666a763004cdb54a95b67730b9ed4bb21a\": rpc error: code = NotFound desc = could not find container \"eda94a8d83e7b9b941d8d728214164666a763004cdb54a95b67730b9ed4bb21a\": container with ID starting with eda94a8d83e7b9b941d8d728214164666a763004cdb54a95b67730b9ed4bb21a not found: ID does not exist" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.126666 5008 scope.go:117] "RemoveContainer" containerID="922dd14c1fc131087530a679c50232179f2527a755c5b35806f14d9f5f69d2cd" Jan 29 15:50:45 crc kubenswrapper[5008]: E0129 15:50:45.128484 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"922dd14c1fc131087530a679c50232179f2527a755c5b35806f14d9f5f69d2cd\": container with ID starting with 922dd14c1fc131087530a679c50232179f2527a755c5b35806f14d9f5f69d2cd not found: ID does not exist" containerID="922dd14c1fc131087530a679c50232179f2527a755c5b35806f14d9f5f69d2cd" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.128534 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"922dd14c1fc131087530a679c50232179f2527a755c5b35806f14d9f5f69d2cd"} err="failed to get container status \"922dd14c1fc131087530a679c50232179f2527a755c5b35806f14d9f5f69d2cd\": rpc error: code = NotFound desc = could not find container \"922dd14c1fc131087530a679c50232179f2527a755c5b35806f14d9f5f69d2cd\": container with ID starting with 922dd14c1fc131087530a679c50232179f2527a755c5b35806f14d9f5f69d2cd not found: ID 
does not exist" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.220414 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.268585 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-combined-ca-bundle\") pod \"a4572386-a7c3-434a-8bcb-d1643d6893c9\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.268634 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rw62q\" (UniqueName: \"kubernetes.io/projected/a4572386-a7c3-434a-8bcb-d1643d6893c9-kube-api-access-rw62q\") pod \"a4572386-a7c3-434a-8bcb-d1643d6893c9\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.268734 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-config-data\") pod \"a4572386-a7c3-434a-8bcb-d1643d6893c9\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.268870 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a4572386-a7c3-434a-8bcb-d1643d6893c9-httpd-run\") pod \"a4572386-a7c3-434a-8bcb-d1643d6893c9\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.269092 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"a4572386-a7c3-434a-8bcb-d1643d6893c9\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.269202 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-scripts\") pod \"a4572386-a7c3-434a-8bcb-d1643d6893c9\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.269256 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4572386-a7c3-434a-8bcb-d1643d6893c9-logs\") pod \"a4572386-a7c3-434a-8bcb-d1643d6893c9\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.269288 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-public-tls-certs\") pod \"a4572386-a7c3-434a-8bcb-d1643d6893c9\" (UID: \"a4572386-a7c3-434a-8bcb-d1643d6893c9\") " Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.274569 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-scripts" (OuterVolumeSpecName: "scripts") pod "a4572386-a7c3-434a-8bcb-d1643d6893c9" (UID: "a4572386-a7c3-434a-8bcb-d1643d6893c9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.283067 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4572386-a7c3-434a-8bcb-d1643d6893c9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a4572386-a7c3-434a-8bcb-d1643d6893c9" (UID: "a4572386-a7c3-434a-8bcb-d1643d6893c9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.286650 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "a4572386-a7c3-434a-8bcb-d1643d6893c9" (UID: "a4572386-a7c3-434a-8bcb-d1643d6893c9"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.291053 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4572386-a7c3-434a-8bcb-d1643d6893c9-logs" (OuterVolumeSpecName: "logs") pod "a4572386-a7c3-434a-8bcb-d1643d6893c9" (UID: "a4572386-a7c3-434a-8bcb-d1643d6893c9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.298936 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4572386-a7c3-434a-8bcb-d1643d6893c9-kube-api-access-rw62q" (OuterVolumeSpecName: "kube-api-access-rw62q") pod "a4572386-a7c3-434a-8bcb-d1643d6893c9" (UID: "a4572386-a7c3-434a-8bcb-d1643d6893c9"). InnerVolumeSpecName "kube-api-access-rw62q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.307981 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4572386-a7c3-434a-8bcb-d1643d6893c9" (UID: "a4572386-a7c3-434a-8bcb-d1643d6893c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.323392 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a4572386-a7c3-434a-8bcb-d1643d6893c9" (UID: "a4572386-a7c3-434a-8bcb-d1643d6893c9"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.359287 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" path="/var/lib/kubelet/pods/6bb31a7e-2eaf-445f-84d5-50aa5d1d007b/volumes" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.386319 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.386351 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4572386-a7c3-434a-8bcb-d1643d6893c9-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.388654 5008 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.388670 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.388679 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rw62q\" (UniqueName: \"kubernetes.io/projected/a4572386-a7c3-434a-8bcb-d1643d6893c9-kube-api-access-rw62q\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.388692 5008 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a4572386-a7c3-434a-8bcb-d1643d6893c9-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.388714 5008 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.386955 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-config-data" (OuterVolumeSpecName: "config-data") pod "a4572386-a7c3-434a-8bcb-d1643d6893c9" (UID: "a4572386-a7c3-434a-8bcb-d1643d6893c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.407788 5008 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.491798 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4572386-a7c3-434a-8bcb-d1643d6893c9-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.491833 5008 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.607102 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-fe67-account-create-update-bk5t9" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.698066 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z7ch\" (UniqueName: \"kubernetes.io/projected/804a6c8c-4d3d-4949-adad-bf28d059ac39-kube-api-access-9z7ch\") pod \"804a6c8c-4d3d-4949-adad-bf28d059ac39\" (UID: \"804a6c8c-4d3d-4949-adad-bf28d059ac39\") " Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.698330 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/804a6c8c-4d3d-4949-adad-bf28d059ac39-operator-scripts\") pod \"804a6c8c-4d3d-4949-adad-bf28d059ac39\" (UID: \"804a6c8c-4d3d-4949-adad-bf28d059ac39\") " Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.699004 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/804a6c8c-4d3d-4949-adad-bf28d059ac39-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "804a6c8c-4d3d-4949-adad-bf28d059ac39" (UID: "804a6c8c-4d3d-4949-adad-bf28d059ac39"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.699896 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4e36-account-create-update-mthn6" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.702053 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/804a6c8c-4d3d-4949-adad-bf28d059ac39-kube-api-access-9z7ch" (OuterVolumeSpecName: "kube-api-access-9z7ch") pod "804a6c8c-4d3d-4949-adad-bf28d059ac39" (UID: "804a6c8c-4d3d-4949-adad-bf28d059ac39"). InnerVolumeSpecName "kube-api-access-9z7ch". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.712560 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-e284-account-create-update-cz9rj" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.799269 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63f2899c-3ee5-4d2c-ae4f-487783fede07-operator-scripts\") pod \"63f2899c-3ee5-4d2c-ae4f-487783fede07\" (UID: \"63f2899c-3ee5-4d2c-ae4f-487783fede07\") " Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.799359 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zz5q\" (UniqueName: \"kubernetes.io/projected/ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e-kube-api-access-2zz5q\") pod \"ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e\" (UID: \"ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e\") " Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.799435 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq7rj\" (UniqueName: \"kubernetes.io/projected/63f2899c-3ee5-4d2c-ae4f-487783fede07-kube-api-access-pq7rj\") pod \"63f2899c-3ee5-4d2c-ae4f-487783fede07\" (UID: \"63f2899c-3ee5-4d2c-ae4f-487783fede07\") " Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.799462 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e-operator-scripts\") pod \"ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e\" (UID: \"ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e\") " Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.800090 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63f2899c-3ee5-4d2c-ae4f-487783fede07-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "63f2899c-3ee5-4d2c-ae4f-487783fede07" (UID: "63f2899c-3ee5-4d2c-ae4f-487783fede07"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.800571 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e" (UID: "ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.800826 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9z7ch\" (UniqueName: \"kubernetes.io/projected/804a6c8c-4d3d-4949-adad-bf28d059ac39-kube-api-access-9z7ch\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.800849 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.800860 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63f2899c-3ee5-4d2c-ae4f-487783fede07-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.800872 5008 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/804a6c8c-4d3d-4949-adad-bf28d059ac39-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.804059 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63f2899c-3ee5-4d2c-ae4f-487783fede07-kube-api-access-pq7rj" (OuterVolumeSpecName: "kube-api-access-pq7rj") pod "63f2899c-3ee5-4d2c-ae4f-487783fede07" (UID: "63f2899c-3ee5-4d2c-ae4f-487783fede07"). InnerVolumeSpecName "kube-api-access-pq7rj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.805516 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e-kube-api-access-2zz5q" (OuterVolumeSpecName: "kube-api-access-2zz5q") pod "ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e" (UID: "ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e"). InnerVolumeSpecName "kube-api-access-2zz5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.902477 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zz5q\" (UniqueName: \"kubernetes.io/projected/ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e-kube-api-access-2zz5q\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:45 crc kubenswrapper[5008]: I0129 15:50:45.902519 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pq7rj\" (UniqueName: \"kubernetes.io/projected/63f2899c-3ee5-4d2c-ae4f-487783fede07-kube-api-access-pq7rj\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.001862 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fe67-account-create-update-bk5t9" event={"ID":"804a6c8c-4d3d-4949-adad-bf28d059ac39","Type":"ContainerDied","Data":"c441bacddbbf24594f7845afb68dc94be9cda37d3cadf2779f979bf27b1d5a46"} Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.001938 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c441bacddbbf24594f7845afb68dc94be9cda37d3cadf2779f979bf27b1d5a46" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.001868 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-fe67-account-create-update-bk5t9" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.004524 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a4572386-a7c3-434a-8bcb-d1643d6893c9","Type":"ContainerDied","Data":"7e694d90fa6a6ef1130c12d5f4ef32d5a6b46fd7321b4f1fabcb430d1ab3333d"} Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.004591 5008 scope.go:117] "RemoveContainer" containerID="c487f572a202948b8d78e72676270d3b2c63fcc77e90c053860ecb9f63566609" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.004798 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.008611 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4e36-account-create-update-mthn6" event={"ID":"63f2899c-3ee5-4d2c-ae4f-487783fede07","Type":"ContainerDied","Data":"9b12f47cdbb2b896c48220d1aac0e8e6b7220c6ea2c4ff4cf2b76a913ef44a53"} Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.008716 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b12f47cdbb2b896c48220d1aac0e8e6b7220c6ea2c4ff4cf2b76a913ef44a53" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.008951 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4e36-account-create-update-mthn6" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.018367 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e284-account-create-update-cz9rj" event={"ID":"ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e","Type":"ContainerDied","Data":"50a2b0760e4fa9cc3fb045d185bf9670bd499e7f4ef0f98235ea9f3653af510c"} Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.018407 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50a2b0760e4fa9cc3fb045d185bf9670bd499e7f4ef0f98235ea9f3653af510c" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.018476 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-e284-account-create-update-cz9rj" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.055269 5008 scope.go:117] "RemoveContainer" containerID="e0fa9f1865b5505ccd4891898d3b56eec542add6175364fd360ee56950f55bac" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.085742 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.100848 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.109743 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:50:46 crc kubenswrapper[5008]: E0129 15:50:46.126191 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63f2899c-3ee5-4d2c-ae4f-487783fede07" containerName="mariadb-account-create-update" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126227 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="63f2899c-3ee5-4d2c-ae4f-487783fede07" containerName="mariadb-account-create-update" Jan 29 15:50:46 crc kubenswrapper[5008]: E0129 15:50:46.126242 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4572386-a7c3-434a-8bcb-d1643d6893c9" containerName="glance-httpd" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126248 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4572386-a7c3-434a-8bcb-d1643d6893c9" containerName="glance-httpd" Jan 29 15:50:46 crc kubenswrapper[5008]: E0129 15:50:46.126257 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a58042-fefd-43b8-b186-905dcfc7b1af" containerName="mariadb-database-create" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126262 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a58042-fefd-43b8-b186-905dcfc7b1af" containerName="mariadb-database-create" Jan 29 15:50:46 crc kubenswrapper[5008]: E0129 15:50:46.126279 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="804a6c8c-4d3d-4949-adad-bf28d059ac39" containerName="mariadb-account-create-update" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126286 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="804a6c8c-4d3d-4949-adad-bf28d059ac39" containerName="mariadb-account-create-update" Jan 29 15:50:46 crc kubenswrapper[5008]: E0129 15:50:46.126296 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e" containerName="mariadb-account-create-update" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126301 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e" containerName="mariadb-account-create-update" Jan 29 15:50:46 crc kubenswrapper[5008]: E0129 15:50:46.126314 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4572386-a7c3-434a-8bcb-d1643d6893c9" containerName="glance-log" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126319 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4572386-a7c3-434a-8bcb-d1643d6893c9" containerName="glance-log" Jan 29 15:50:46 crc kubenswrapper[5008]: E0129 15:50:46.126325 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" containerName="placement-api" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126330 5008 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" containerName="placement-api" Jan 29 15:50:46 crc kubenswrapper[5008]: E0129 15:50:46.126345 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="110f96e6-c230-44f3-9247-90283da8976c" containerName="mariadb-database-create" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126350 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="110f96e6-c230-44f3-9247-90283da8976c" containerName="mariadb-database-create" Jan 29 15:50:46 crc kubenswrapper[5008]: E0129 15:50:46.126360 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f34f608-b2f8-452e-8f0d-ef600929c36e" containerName="mariadb-database-create" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126366 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f34f608-b2f8-452e-8f0d-ef600929c36e" containerName="mariadb-database-create" Jan 29 15:50:46 crc kubenswrapper[5008]: E0129 15:50:46.126380 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" containerName="placement-log" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126386 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" containerName="placement-log" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126542 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" containerName="placement-api" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126552 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e" containerName="mariadb-account-create-update" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126562 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="110f96e6-c230-44f3-9247-90283da8976c" containerName="mariadb-database-create" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126570 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4572386-a7c3-434a-8bcb-d1643d6893c9" containerName="glance-log" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126579 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="63f2899c-3ee5-4d2c-ae4f-487783fede07" containerName="mariadb-account-create-update" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126592 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bb31a7e-2eaf-445f-84d5-50aa5d1d007b" containerName="placement-log" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126606 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4572386-a7c3-434a-8bcb-d1643d6893c9" containerName="glance-httpd" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126614 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a58042-fefd-43b8-b186-905dcfc7b1af" containerName="mariadb-database-create" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126624 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f34f608-b2f8-452e-8f0d-ef600929c36e" containerName="mariadb-database-create" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.126632 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="804a6c8c-4d3d-4949-adad-bf28d059ac39" containerName="mariadb-account-create-update" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.127437 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 
15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.127522 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.139690 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.139695 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.207096 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwggz\" (UniqueName: \"kubernetes.io/projected/b210097f-985c-4014-a76e-b430ef390fce-kube-api-access-bwggz\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.207149 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b210097f-985c-4014-a76e-b430ef390fce-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.207169 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b210097f-985c-4014-a76e-b430ef390fce-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.207196 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.207213 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b210097f-985c-4014-a76e-b430ef390fce-scripts\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.207240 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b210097f-985c-4014-a76e-b430ef390fce-logs\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.207283 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b210097f-985c-4014-a76e-b430ef390fce-config-data\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.207305 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b210097f-985c-4014-a76e-b430ef390fce-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.308594 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwggz\" (UniqueName: \"kubernetes.io/projected/b210097f-985c-4014-a76e-b430ef390fce-kube-api-access-bwggz\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.308685 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b210097f-985c-4014-a76e-b430ef390fce-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.308705 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b210097f-985c-4014-a76e-b430ef390fce-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.308727 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b210097f-985c-4014-a76e-b430ef390fce-scripts\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.308746 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.308770 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b210097f-985c-4014-a76e-b430ef390fce-logs\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.308804 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b210097f-985c-4014-a76e-b430ef390fce-config-data\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.308828 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b210097f-985c-4014-a76e-b430ef390fce-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.309224 5008 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" 
(UID: \"b210097f-985c-4014-a76e-b430ef390fce\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.309293 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b210097f-985c-4014-a76e-b430ef390fce-logs\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.309356 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b210097f-985c-4014-a76e-b430ef390fce-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.314567 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b210097f-985c-4014-a76e-b430ef390fce-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.315172 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b210097f-985c-4014-a76e-b430ef390fce-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.315240 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b210097f-985c-4014-a76e-b430ef390fce-config-data\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.319401 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b210097f-985c-4014-a76e-b430ef390fce-scripts\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.330084 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwggz\" (UniqueName: \"kubernetes.io/projected/b210097f-985c-4014-a76e-b430ef390fce-kube-api-access-bwggz\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.339358 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"b210097f-985c-4014-a76e-b430ef390fce\") " pod="openstack/glance-default-external-api-0" Jan 29 15:50:46 crc kubenswrapper[5008]: I0129 15:50:46.455468 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 15:50:47 crc kubenswrapper[5008]: I0129 15:50:47.199679 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:50:47 crc kubenswrapper[5008]: I0129 15:50:47.334481 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4572386-a7c3-434a-8bcb-d1643d6893c9" path="/var/lib/kubelet/pods/a4572386-a7c3-434a-8bcb-d1643d6893c9/volumes" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.053804 5008 generic.go:334] "Generic (PLEG): container finished" podID="deb07ec3-dbb1-49c4-a9cc-155472fc28bd" containerID="545a1369d45b715a3fe719964ed37da74cd517e9b86ae7060e6fa55a82e6ac61" exitCode=0 Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.054074 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"deb07ec3-dbb1-49c4-a9cc-155472fc28bd","Type":"ContainerDied","Data":"545a1369d45b715a3fe719964ed37da74cd517e9b86ae7060e6fa55a82e6ac61"} Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.080691 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b210097f-985c-4014-a76e-b430ef390fce","Type":"ContainerStarted","Data":"1c77dc5a1c47165cc89495ccf8800d8e17aa07125ab958bde86eb223c06adc95"} Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.080754 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b210097f-985c-4014-a76e-b430ef390fce","Type":"ContainerStarted","Data":"a8f0c3e7553f02acd8bc69cfc8d32757da715fe2a5c25e250d1acf3cc83d59b1"} Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.639506 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.788820 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-httpd-run\") pod \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.789111 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzqpv\" (UniqueName: \"kubernetes.io/projected/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-kube-api-access-wzqpv\") pod \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.789189 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.789261 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-internal-tls-certs\") pod \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.789289 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-combined-ca-bundle\") pod \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.789331 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-config-data\") pod \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.789384 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-scripts\") pod \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.789418 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-logs\") pod \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\" (UID: \"deb07ec3-dbb1-49c4-a9cc-155472fc28bd\") " Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.789412 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "deb07ec3-dbb1-49c4-a9cc-155472fc28bd" (UID: "deb07ec3-dbb1-49c4-a9cc-155472fc28bd"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.789935 5008 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.790503 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-logs" (OuterVolumeSpecName: "logs") pod "deb07ec3-dbb1-49c4-a9cc-155472fc28bd" (UID: "deb07ec3-dbb1-49c4-a9cc-155472fc28bd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.796386 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "deb07ec3-dbb1-49c4-a9cc-155472fc28bd" (UID: "deb07ec3-dbb1-49c4-a9cc-155472fc28bd"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.796406 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-scripts" (OuterVolumeSpecName: "scripts") pod "deb07ec3-dbb1-49c4-a9cc-155472fc28bd" (UID: "deb07ec3-dbb1-49c4-a9cc-155472fc28bd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.797085 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-kube-api-access-wzqpv" (OuterVolumeSpecName: "kube-api-access-wzqpv") pod "deb07ec3-dbb1-49c4-a9cc-155472fc28bd" (UID: "deb07ec3-dbb1-49c4-a9cc-155472fc28bd"). InnerVolumeSpecName "kube-api-access-wzqpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.822905 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "deb07ec3-dbb1-49c4-a9cc-155472fc28bd" (UID: "deb07ec3-dbb1-49c4-a9cc-155472fc28bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.853192 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-config-data" (OuterVolumeSpecName: "config-data") pod "deb07ec3-dbb1-49c4-a9cc-155472fc28bd" (UID: "deb07ec3-dbb1-49c4-a9cc-155472fc28bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.854985 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "deb07ec3-dbb1-49c4-a9cc-155472fc28bd" (UID: "deb07ec3-dbb1-49c4-a9cc-155472fc28bd"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.891490 5008 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.891740 5008 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.891874 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.891972 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.892054 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.892134 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.892206 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzqpv\" (UniqueName: \"kubernetes.io/projected/deb07ec3-dbb1-49c4-a9cc-155472fc28bd-kube-api-access-wzqpv\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.923644 5008 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 29 15:50:48 crc kubenswrapper[5008]: I0129 15:50:48.993308 5008 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.092371 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"deb07ec3-dbb1-49c4-a9cc-155472fc28bd","Type":"ContainerDied","Data":"d5ff4add692e0bdecfe0d236bfcf204bfe9c6a37130e4e5f390ced855d6ac026"} Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.092396 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.092430 5008 scope.go:117] "RemoveContainer" containerID="545a1369d45b715a3fe719964ed37da74cd517e9b86ae7060e6fa55a82e6ac61" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.096042 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b210097f-985c-4014-a76e-b430ef390fce","Type":"ContainerStarted","Data":"92c70c8e7f911b9a5337dd362e47e177fc7522ef2c3e0b34c3e165d1d390335d"} Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.121851 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.12183069 podStartE2EDuration="3.12183069s" podCreationTimestamp="2026-01-29 15:50:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:50:49.115308781 +0000 UTC m=+1392.788163018" watchObservedRunningTime="2026-01-29 15:50:49.12183069 +0000 UTC m=+1392.794684927" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.130597 5008 scope.go:117] "RemoveContainer" containerID="dd3b252c8faadfc964f08468ca0dd6531af9e9a227235dd0778b9ecd9c6cebce" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.154171 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.164094 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.182843 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:50:49 crc kubenswrapper[5008]: E0129 15:50:49.184148 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deb07ec3-dbb1-49c4-a9cc-155472fc28bd" containerName="glance-httpd" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.184195 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="deb07ec3-dbb1-49c4-a9cc-155472fc28bd" containerName="glance-httpd" Jan 29 15:50:49 crc kubenswrapper[5008]: E0129 15:50:49.184227 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deb07ec3-dbb1-49c4-a9cc-155472fc28bd" containerName="glance-log" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.187800 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="deb07ec3-dbb1-49c4-a9cc-155472fc28bd" containerName="glance-log" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.188542 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="deb07ec3-dbb1-49c4-a9cc-155472fc28bd" containerName="glance-httpd" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.188569 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="deb07ec3-dbb1-49c4-a9cc-155472fc28bd" containerName="glance-log" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.190394 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.195747 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.195855 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.197446 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.297344 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d30face9-2636-4cb7-8e84-8558b7b40df4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.297396 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d30face9-2636-4cb7-8e84-8558b7b40df4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.297432 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d30face9-2636-4cb7-8e84-8558b7b40df4-logs\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.297453 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf42m\" (UniqueName: \"kubernetes.io/projected/d30face9-2636-4cb7-8e84-8558b7b40df4-kube-api-access-rf42m\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.297491 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d30face9-2636-4cb7-8e84-8558b7b40df4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.297529 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d30face9-2636-4cb7-8e84-8558b7b40df4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.297597 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.297651 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d30face9-2636-4cb7-8e84-8558b7b40df4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.334086 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deb07ec3-dbb1-49c4-a9cc-155472fc28bd" path="/var/lib/kubelet/pods/deb07ec3-dbb1-49c4-a9cc-155472fc28bd/volumes" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.399303 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.399388 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d30face9-2636-4cb7-8e84-8558b7b40df4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.399415 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d30face9-2636-4cb7-8e84-8558b7b40df4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.399436 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d30face9-2636-4cb7-8e84-8558b7b40df4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.399462 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d30face9-2636-4cb7-8e84-8558b7b40df4-logs\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.399478 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf42m\" (UniqueName: \"kubernetes.io/projected/d30face9-2636-4cb7-8e84-8558b7b40df4-kube-api-access-rf42m\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.399493 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d30face9-2636-4cb7-8e84-8558b7b40df4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.399536 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d30face9-2636-4cb7-8e84-8558b7b40df4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " 
pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.400705 5008 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.400723 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d30face9-2636-4cb7-8e84-8558b7b40df4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.400970 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d30face9-2636-4cb7-8e84-8558b7b40df4-logs\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.404349 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d30face9-2636-4cb7-8e84-8558b7b40df4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.404526 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d30face9-2636-4cb7-8e84-8558b7b40df4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.404886 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d30face9-2636-4cb7-8e84-8558b7b40df4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.404948 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d30face9-2636-4cb7-8e84-8558b7b40df4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.432415 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf42m\" (UniqueName: \"kubernetes.io/projected/d30face9-2636-4cb7-8e84-8558b7b40df4-kube-api-access-rf42m\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.439211 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"d30face9-2636-4cb7-8e84-8558b7b40df4\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.518621 5008 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.656995 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9mffk"] Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.658383 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9mffk" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.660353 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.660737 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-s4fbc" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.665589 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.673383 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9mffk"] Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.808207 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls57p\" (UniqueName: \"kubernetes.io/projected/00b42485-f42b-4ca6-8e84-1a795454dd9f-kube-api-access-ls57p\") pod \"nova-cell0-conductor-db-sync-9mffk\" (UID: \"00b42485-f42b-4ca6-8e84-1a795454dd9f\") " pod="openstack/nova-cell0-conductor-db-sync-9mffk" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.808261 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-config-data\") pod \"nova-cell0-conductor-db-sync-9mffk\" (UID: \"00b42485-f42b-4ca6-8e84-1a795454dd9f\") " pod="openstack/nova-cell0-conductor-db-sync-9mffk" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.808348 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-scripts\") pod \"nova-cell0-conductor-db-sync-9mffk\" (UID: \"00b42485-f42b-4ca6-8e84-1a795454dd9f\") " pod="openstack/nova-cell0-conductor-db-sync-9mffk" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.808382 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-9mffk\" (UID: \"00b42485-f42b-4ca6-8e84-1a795454dd9f\") " pod="openstack/nova-cell0-conductor-db-sync-9mffk" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.909605 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-scripts\") pod \"nova-cell0-conductor-db-sync-9mffk\" (UID: \"00b42485-f42b-4ca6-8e84-1a795454dd9f\") " pod="openstack/nova-cell0-conductor-db-sync-9mffk" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.909687 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-9mffk\" (UID: \"00b42485-f42b-4ca6-8e84-1a795454dd9f\") " 
pod="openstack/nova-cell0-conductor-db-sync-9mffk" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.909744 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls57p\" (UniqueName: \"kubernetes.io/projected/00b42485-f42b-4ca6-8e84-1a795454dd9f-kube-api-access-ls57p\") pod \"nova-cell0-conductor-db-sync-9mffk\" (UID: \"00b42485-f42b-4ca6-8e84-1a795454dd9f\") " pod="openstack/nova-cell0-conductor-db-sync-9mffk" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.909764 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-config-data\") pod \"nova-cell0-conductor-db-sync-9mffk\" (UID: \"00b42485-f42b-4ca6-8e84-1a795454dd9f\") " pod="openstack/nova-cell0-conductor-db-sync-9mffk" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.915377 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-config-data\") pod \"nova-cell0-conductor-db-sync-9mffk\" (UID: \"00b42485-f42b-4ca6-8e84-1a795454dd9f\") " pod="openstack/nova-cell0-conductor-db-sync-9mffk" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.915736 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-scripts\") pod \"nova-cell0-conductor-db-sync-9mffk\" (UID: \"00b42485-f42b-4ca6-8e84-1a795454dd9f\") " pod="openstack/nova-cell0-conductor-db-sync-9mffk" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.919701 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-9mffk\" (UID: \"00b42485-f42b-4ca6-8e84-1a795454dd9f\") " pod="openstack/nova-cell0-conductor-db-sync-9mffk" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.934973 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls57p\" (UniqueName: \"kubernetes.io/projected/00b42485-f42b-4ca6-8e84-1a795454dd9f-kube-api-access-ls57p\") pod \"nova-cell0-conductor-db-sync-9mffk\" (UID: \"00b42485-f42b-4ca6-8e84-1a795454dd9f\") " pod="openstack/nova-cell0-conductor-db-sync-9mffk" Jan 29 15:50:49 crc kubenswrapper[5008]: I0129 15:50:49.981939 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9mffk" Jan 29 15:50:50 crc kubenswrapper[5008]: I0129 15:50:50.118286 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:50:50 crc kubenswrapper[5008]: W0129 15:50:50.128706 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd30face9_2636_4cb7_8e84_8558b7b40df4.slice/crio-49ac2c3c603bb6f8c398ea508483ca0ced12a9f9ffcece09ffdfc60f9c90cba3 WatchSource:0}: Error finding container 49ac2c3c603bb6f8c398ea508483ca0ced12a9f9ffcece09ffdfc60f9c90cba3: Status 404 returned error can't find the container with id 49ac2c3c603bb6f8c398ea508483ca0ced12a9f9ffcece09ffdfc60f9c90cba3 Jan 29 15:50:50 crc kubenswrapper[5008]: I0129 15:50:50.291742 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9mffk"] Jan 29 15:50:50 crc kubenswrapper[5008]: W0129 15:50:50.295838 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00b42485_f42b_4ca6_8e84_1a795454dd9f.slice/crio-9cfdb60cd6bab187b310c7e3b7b9918a771aed98988c83c807016cc578b45171 WatchSource:0}: Error finding container 9cfdb60cd6bab187b310c7e3b7b9918a771aed98988c83c807016cc578b45171: Status 404 returned error can't find the container with id 9cfdb60cd6bab187b310c7e3b7b9918a771aed98988c83c807016cc578b45171 Jan 29 15:50:51 crc kubenswrapper[5008]: I0129 15:50:51.118573 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d30face9-2636-4cb7-8e84-8558b7b40df4","Type":"ContainerStarted","Data":"7d923a651364584dd0a68975de72d53fe72eae96ed00ce0b324f3cba07f9ce12"} Jan 29 15:50:51 crc kubenswrapper[5008]: I0129 15:50:51.118954 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d30face9-2636-4cb7-8e84-8558b7b40df4","Type":"ContainerStarted","Data":"49ac2c3c603bb6f8c398ea508483ca0ced12a9f9ffcece09ffdfc60f9c90cba3"} Jan 29 15:50:51 crc kubenswrapper[5008]: I0129 15:50:51.124346 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9mffk" event={"ID":"00b42485-f42b-4ca6-8e84-1a795454dd9f","Type":"ContainerStarted","Data":"9cfdb60cd6bab187b310c7e3b7b9918a771aed98988c83c807016cc578b45171"} Jan 29 15:50:52 crc kubenswrapper[5008]: I0129 15:50:52.149408 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d30face9-2636-4cb7-8e84-8558b7b40df4","Type":"ContainerStarted","Data":"7b624b6342c7c0ec0d24499d3b0550c0800023f732ceb5a4c809881749409b62"} Jan 29 15:50:52 crc kubenswrapper[5008]: I0129 15:50:52.183157 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.183141108 podStartE2EDuration="3.183141108s" podCreationTimestamp="2026-01-29 15:50:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:50:52.176699522 +0000 UTC m=+1395.849553759" watchObservedRunningTime="2026-01-29 15:50:52.183141108 +0000 UTC m=+1395.855995345" Jan 29 15:50:52 crc kubenswrapper[5008]: I0129 15:50:52.875695 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:50:52 crc kubenswrapper[5008]: I0129 15:50:52.982416 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-combined-ca-bundle\") pod \"c81636ad-f799-43f6-8304-b2121e7bb427\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " Jan 29 15:50:52 crc kubenswrapper[5008]: I0129 15:50:52.982477 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-config-data\") pod \"c81636ad-f799-43f6-8304-b2121e7bb427\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " Jan 29 15:50:52 crc kubenswrapper[5008]: I0129 15:50:52.982581 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rfpc\" (UniqueName: \"kubernetes.io/projected/c81636ad-f799-43f6-8304-b2121e7bb427-kube-api-access-6rfpc\") pod \"c81636ad-f799-43f6-8304-b2121e7bb427\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " Jan 29 15:50:52 crc kubenswrapper[5008]: I0129 15:50:52.982627 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-scripts\") pod \"c81636ad-f799-43f6-8304-b2121e7bb427\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " Jan 29 15:50:52 crc kubenswrapper[5008]: I0129 15:50:52.982722 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-sg-core-conf-yaml\") pod \"c81636ad-f799-43f6-8304-b2121e7bb427\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " Jan 29 15:50:52 crc kubenswrapper[5008]: I0129 15:50:52.982772 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c81636ad-f799-43f6-8304-b2121e7bb427-run-httpd\") pod \"c81636ad-f799-43f6-8304-b2121e7bb427\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " Jan 29 15:50:52 crc kubenswrapper[5008]: I0129 15:50:52.982882 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c81636ad-f799-43f6-8304-b2121e7bb427-log-httpd\") pod \"c81636ad-f799-43f6-8304-b2121e7bb427\" (UID: \"c81636ad-f799-43f6-8304-b2121e7bb427\") " Jan 29 15:50:52 crc kubenswrapper[5008]: I0129 15:50:52.984163 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c81636ad-f799-43f6-8304-b2121e7bb427-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c81636ad-f799-43f6-8304-b2121e7bb427" (UID: "c81636ad-f799-43f6-8304-b2121e7bb427"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:50:52 crc kubenswrapper[5008]: I0129 15:50:52.985637 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c81636ad-f799-43f6-8304-b2121e7bb427-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c81636ad-f799-43f6-8304-b2121e7bb427" (UID: "c81636ad-f799-43f6-8304-b2121e7bb427"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:50:52 crc kubenswrapper[5008]: I0129 15:50:52.987719 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c81636ad-f799-43f6-8304-b2121e7bb427-kube-api-access-6rfpc" (OuterVolumeSpecName: "kube-api-access-6rfpc") pod "c81636ad-f799-43f6-8304-b2121e7bb427" (UID: "c81636ad-f799-43f6-8304-b2121e7bb427"). InnerVolumeSpecName "kube-api-access-6rfpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:50:52 crc kubenswrapper[5008]: I0129 15:50:52.992617 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-scripts" (OuterVolumeSpecName: "scripts") pod "c81636ad-f799-43f6-8304-b2121e7bb427" (UID: "c81636ad-f799-43f6-8304-b2121e7bb427"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.008849 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c81636ad-f799-43f6-8304-b2121e7bb427" (UID: "c81636ad-f799-43f6-8304-b2121e7bb427"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.064029 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c81636ad-f799-43f6-8304-b2121e7bb427" (UID: "c81636ad-f799-43f6-8304-b2121e7bb427"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.085501 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rfpc\" (UniqueName: \"kubernetes.io/projected/c81636ad-f799-43f6-8304-b2121e7bb427-kube-api-access-6rfpc\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.085534 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.085543 5008 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.085552 5008 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c81636ad-f799-43f6-8304-b2121e7bb427-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.085561 5008 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c81636ad-f799-43f6-8304-b2121e7bb427-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.085572 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.097114 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-config-data" (OuterVolumeSpecName: "config-data") pod "c81636ad-f799-43f6-8304-b2121e7bb427" (UID: "c81636ad-f799-43f6-8304-b2121e7bb427"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.162271 5008 generic.go:334] "Generic (PLEG): container finished" podID="c81636ad-f799-43f6-8304-b2121e7bb427" containerID="5bdb92bd8804311389315e1c2733efae43b86032b34ec9f92e93486c776777f4" exitCode=0 Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.162347 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c81636ad-f799-43f6-8304-b2121e7bb427","Type":"ContainerDied","Data":"5bdb92bd8804311389315e1c2733efae43b86032b34ec9f92e93486c776777f4"} Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.162367 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.162403 5008 scope.go:117] "RemoveContainer" containerID="1160ebdc889e903ce1ab9549db1c8d7aedbec5dbd448d12df99a7b71c4f59a71" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.162393 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c81636ad-f799-43f6-8304-b2121e7bb427","Type":"ContainerDied","Data":"0e23d38c1351d3b9d8ce539ce39bcaaeb12db97fb4d36c36c739e94b79c66551"} Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.190720 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c81636ad-f799-43f6-8304-b2121e7bb427-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.201910 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.203762 5008 scope.go:117] "RemoveContainer" containerID="6cb7bc803573f6d8292dd7a40b28153e8f4ff1271e0fa808ba53834296b1df6d" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.221969 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.246658 5008 scope.go:117] "RemoveContainer" containerID="57b9f0118bc63b684df15ec4953cbf43eb08b4c8cd41ed4c65c18bdbe33f4dce" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.256913 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:53 crc kubenswrapper[5008]: E0129 15:50:53.257576 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" containerName="sg-core" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.257599 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" containerName="sg-core" Jan 29 15:50:53 crc kubenswrapper[5008]: E0129 15:50:53.257617 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" containerName="proxy-httpd" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.257625 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" containerName="proxy-httpd" Jan 29 15:50:53 crc kubenswrapper[5008]: E0129 15:50:53.257642 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" containerName="ceilometer-central-agent" Jan 29 15:50:53 
crc kubenswrapper[5008]: I0129 15:50:53.257648 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" containerName="ceilometer-central-agent" Jan 29 15:50:53 crc kubenswrapper[5008]: E0129 15:50:53.257671 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" containerName="ceilometer-notification-agent" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.257677 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" containerName="ceilometer-notification-agent" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.257857 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" containerName="ceilometer-notification-agent" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.257878 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" containerName="proxy-httpd" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.257885 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" containerName="sg-core" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.257896 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" containerName="ceilometer-central-agent" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.264469 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.268135 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.269260 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.280484 5008 scope.go:117] "RemoveContainer" containerID="5bdb92bd8804311389315e1c2733efae43b86032b34ec9f92e93486c776777f4" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.281652 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.311493 5008 scope.go:117] "RemoveContainer" containerID="1160ebdc889e903ce1ab9549db1c8d7aedbec5dbd448d12df99a7b71c4f59a71" Jan 29 15:50:53 crc kubenswrapper[5008]: E0129 15:50:53.311897 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1160ebdc889e903ce1ab9549db1c8d7aedbec5dbd448d12df99a7b71c4f59a71\": container with ID starting with 1160ebdc889e903ce1ab9549db1c8d7aedbec5dbd448d12df99a7b71c4f59a71 not found: ID does not exist" containerID="1160ebdc889e903ce1ab9549db1c8d7aedbec5dbd448d12df99a7b71c4f59a71" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.311953 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1160ebdc889e903ce1ab9549db1c8d7aedbec5dbd448d12df99a7b71c4f59a71"} err="failed to get container status \"1160ebdc889e903ce1ab9549db1c8d7aedbec5dbd448d12df99a7b71c4f59a71\": rpc error: code = NotFound desc = could not find container \"1160ebdc889e903ce1ab9549db1c8d7aedbec5dbd448d12df99a7b71c4f59a71\": container with ID starting with 1160ebdc889e903ce1ab9549db1c8d7aedbec5dbd448d12df99a7b71c4f59a71 not found: ID does not exist" Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 
15:50:53.311975 5008 scope.go:117] "RemoveContainer" containerID="6cb7bc803573f6d8292dd7a40b28153e8f4ff1271e0fa808ba53834296b1df6d"
Jan 29 15:50:53 crc kubenswrapper[5008]: E0129 15:50:53.312496 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cb7bc803573f6d8292dd7a40b28153e8f4ff1271e0fa808ba53834296b1df6d\": container with ID starting with 6cb7bc803573f6d8292dd7a40b28153e8f4ff1271e0fa808ba53834296b1df6d not found: ID does not exist" containerID="6cb7bc803573f6d8292dd7a40b28153e8f4ff1271e0fa808ba53834296b1df6d"
Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.312516 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cb7bc803573f6d8292dd7a40b28153e8f4ff1271e0fa808ba53834296b1df6d"} err="failed to get container status \"6cb7bc803573f6d8292dd7a40b28153e8f4ff1271e0fa808ba53834296b1df6d\": rpc error: code = NotFound desc = could not find container \"6cb7bc803573f6d8292dd7a40b28153e8f4ff1271e0fa808ba53834296b1df6d\": container with ID starting with 6cb7bc803573f6d8292dd7a40b28153e8f4ff1271e0fa808ba53834296b1df6d not found: ID does not exist"
Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.312531 5008 scope.go:117] "RemoveContainer" containerID="57b9f0118bc63b684df15ec4953cbf43eb08b4c8cd41ed4c65c18bdbe33f4dce"
Jan 29 15:50:53 crc kubenswrapper[5008]: E0129 15:50:53.312759 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57b9f0118bc63b684df15ec4953cbf43eb08b4c8cd41ed4c65c18bdbe33f4dce\": container with ID starting with 57b9f0118bc63b684df15ec4953cbf43eb08b4c8cd41ed4c65c18bdbe33f4dce not found: ID does not exist" containerID="57b9f0118bc63b684df15ec4953cbf43eb08b4c8cd41ed4c65c18bdbe33f4dce"
Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.312805 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57b9f0118bc63b684df15ec4953cbf43eb08b4c8cd41ed4c65c18bdbe33f4dce"} err="failed to get container status \"57b9f0118bc63b684df15ec4953cbf43eb08b4c8cd41ed4c65c18bdbe33f4dce\": rpc error: code = NotFound desc = could not find container \"57b9f0118bc63b684df15ec4953cbf43eb08b4c8cd41ed4c65c18bdbe33f4dce\": container with ID starting with 57b9f0118bc63b684df15ec4953cbf43eb08b4c8cd41ed4c65c18bdbe33f4dce not found: ID does not exist"
Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.312818 5008 scope.go:117] "RemoveContainer" containerID="5bdb92bd8804311389315e1c2733efae43b86032b34ec9f92e93486c776777f4"
Jan 29 15:50:53 crc kubenswrapper[5008]: E0129 15:50:53.313188 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bdb92bd8804311389315e1c2733efae43b86032b34ec9f92e93486c776777f4\": container with ID starting with 5bdb92bd8804311389315e1c2733efae43b86032b34ec9f92e93486c776777f4 not found: ID does not exist" containerID="5bdb92bd8804311389315e1c2733efae43b86032b34ec9f92e93486c776777f4"
Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.313213 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bdb92bd8804311389315e1c2733efae43b86032b34ec9f92e93486c776777f4"} err="failed to get container status \"5bdb92bd8804311389315e1c2733efae43b86032b34ec9f92e93486c776777f4\": rpc error: code = NotFound desc = could not find container \"5bdb92bd8804311389315e1c2733efae43b86032b34ec9f92e93486c776777f4\": container with ID starting with 5bdb92bd8804311389315e1c2733efae43b86032b34ec9f92e93486c776777f4 not found: ID does not exist"
Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.344531 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c81636ad-f799-43f6-8304-b2121e7bb427" path="/var/lib/kubelet/pods/c81636ad-f799-43f6-8304-b2121e7bb427/volumes"
Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.398573 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58fcv\" (UniqueName: \"kubernetes.io/projected/36d8b2f2-f15e-4b9a-a522-35d228919444-kube-api-access-58fcv\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0"
Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.398628 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0"
Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.398712 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36d8b2f2-f15e-4b9a-a522-35d228919444-log-httpd\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0"
Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.398742 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0"
Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.398774 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-scripts\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0"
Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.398839 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-config-data\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0"
Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.398869 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36d8b2f2-f15e-4b9a-a522-35d228919444-run-httpd\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0"
Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.517734 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58fcv\" (UniqueName: \"kubernetes.io/projected/36d8b2f2-f15e-4b9a-a522-35d228919444-kube-api-access-58fcv\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0"
Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.517794 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0"
Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.517856 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36d8b2f2-f15e-4b9a-a522-35d228919444-log-httpd\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0"
Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.517879 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0"
Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.517904 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-scripts\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0"
Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.517937 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-config-data\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0"
Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.517961 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36d8b2f2-f15e-4b9a-a522-35d228919444-run-httpd\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0"
Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.518377 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36d8b2f2-f15e-4b9a-a522-35d228919444-run-httpd\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0"
Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.521294 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36d8b2f2-f15e-4b9a-a522-35d228919444-log-httpd\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0"
Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.522372 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-scripts\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0"
Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.522435 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0"
Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.523442 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0"
Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.532876 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-config-data\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0"
Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.534767 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58fcv\" (UniqueName: \"kubernetes.io/projected/36d8b2f2-f15e-4b9a-a522-35d228919444-kube-api-access-58fcv\") pod \"ceilometer-0\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") " pod="openstack/ceilometer-0"
Jan 29 15:50:53 crc kubenswrapper[5008]: I0129 15:50:53.611945 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 15:50:54 crc kubenswrapper[5008]: I0129 15:50:54.074228 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 15:50:54 crc kubenswrapper[5008]: W0129 15:50:54.076858 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36d8b2f2_f15e_4b9a_a522_35d228919444.slice/crio-1da072cac2f699d8a48c0ad66c9f3278b5a7e28c720fc4e5dc3e5e7db0670e0e WatchSource:0}: Error finding container 1da072cac2f699d8a48c0ad66c9f3278b5a7e28c720fc4e5dc3e5e7db0670e0e: Status 404 returned error can't find the container with id 1da072cac2f699d8a48c0ad66c9f3278b5a7e28c720fc4e5dc3e5e7db0670e0e
Jan 29 15:50:54 crc kubenswrapper[5008]: I0129 15:50:54.176174 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36d8b2f2-f15e-4b9a-a522-35d228919444","Type":"ContainerStarted","Data":"1da072cac2f699d8a48c0ad66c9f3278b5a7e28c720fc4e5dc3e5e7db0670e0e"}
Jan 29 15:50:56 crc kubenswrapper[5008]: I0129 15:50:56.456044 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 29 15:50:56 crc kubenswrapper[5008]: I0129 15:50:56.457614 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 29 15:50:56 crc kubenswrapper[5008]: I0129 15:50:56.494385 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 29 15:50:56 crc kubenswrapper[5008]: I0129 15:50:56.505399 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 29 15:50:57 crc kubenswrapper[5008]: I0129 15:50:57.203053 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 29 15:50:57 crc kubenswrapper[5008]: I0129 15:50:57.203691 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 29 15:50:59 crc kubenswrapper[5008]: I0129 15:50:59.224887 5008 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 15:50:59 crc kubenswrapper[5008]: I0129 15:50:59.225286 5008 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 15:50:59 crc kubenswrapper[5008]: I0129 15:50:59.236044 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 29 15:50:59 crc kubenswrapper[5008]: I0129 15:50:59.245377 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 29 15:50:59 crc kubenswrapper[5008]: I0129 15:50:59.519016 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 29 15:50:59 crc kubenswrapper[5008]: I0129 15:50:59.519297 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 29 15:50:59 crc kubenswrapper[5008]: I0129 15:50:59.565487 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 29 15:50:59 crc kubenswrapper[5008]: I0129 15:50:59.575272 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 29 15:51:00 crc kubenswrapper[5008]: I0129 15:51:00.236008 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 29 15:51:00 crc kubenswrapper[5008]: I0129 15:51:00.236053 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 29 15:51:01 crc kubenswrapper[5008]: I0129 15:51:01.814071 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 15:51:02 crc kubenswrapper[5008]: I0129 15:51:02.232965 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 29 15:51:02 crc kubenswrapper[5008]: I0129 15:51:02.252623 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9mffk" event={"ID":"00b42485-f42b-4ca6-8e84-1a795454dd9f","Type":"ContainerStarted","Data":"cae76da1b19104ec9ac0d79d4c0c18c044c82a9e0fb4665e780db9f6a9a1f05e"}
Jan 29 15:51:02 crc kubenswrapper[5008]: I0129 15:51:02.254452 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36d8b2f2-f15e-4b9a-a522-35d228919444","Type":"ContainerStarted","Data":"333aa71748cf0dbcc8fddbab51dff8ff1acaa47f116a066d74485824ee50dd82"}
Jan 29 15:51:02 crc kubenswrapper[5008]: I0129 15:51:02.254470 5008 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 15:51:02 crc kubenswrapper[5008]: I0129 15:51:02.274401 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-9mffk" podStartSLOduration=2.481375735 podStartE2EDuration="13.274383521s" podCreationTimestamp="2026-01-29 15:50:49 +0000 UTC" firstStartedPulling="2026-01-29 15:50:50.300917841 +0000 UTC m=+1393.973772078" lastFinishedPulling="2026-01-29 15:51:01.093925627 +0000 UTC m=+1404.766779864" observedRunningTime="2026-01-29 15:51:02.269721759 +0000 UTC m=+1405.942575996" watchObservedRunningTime="2026-01-29 15:51:02.274383521 +0000 UTC m=+1405.947237758"
Jan 29 15:51:02 crc kubenswrapper[5008]: I0129 15:51:02.349299 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 29 15:51:03 crc kubenswrapper[5008]: I0129 15:51:03.264955 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36d8b2f2-f15e-4b9a-a522-35d228919444","Type":"ContainerStarted","Data":"35227769b955309ab8713be39a1f2ffd968e6a4bd2b991d2c6531b44270ba0a3"}
Jan 29 15:51:07 crc kubenswrapper[5008]: I0129 15:51:07.463113 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-klfxq"]
Jan 29 15:51:07 crc kubenswrapper[5008]: I0129 15:51:07.468998 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-klfxq"
Jan 29 15:51:07 crc kubenswrapper[5008]: I0129 15:51:07.488637 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-klfxq"]
Jan 29 15:51:07 crc kubenswrapper[5008]: I0129 15:51:07.605879 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4463fec1-8026-4831-9f99-d7b8ba936dc2-catalog-content\") pod \"redhat-operators-klfxq\" (UID: \"4463fec1-8026-4831-9f99-d7b8ba936dc2\") " pod="openshift-marketplace/redhat-operators-klfxq"
Jan 29 15:51:07 crc kubenswrapper[5008]: I0129 15:51:07.606157 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2268w\" (UniqueName: \"kubernetes.io/projected/4463fec1-8026-4831-9f99-d7b8ba936dc2-kube-api-access-2268w\") pod \"redhat-operators-klfxq\" (UID: \"4463fec1-8026-4831-9f99-d7b8ba936dc2\") " pod="openshift-marketplace/redhat-operators-klfxq"
Jan 29 15:51:07 crc kubenswrapper[5008]: I0129 15:51:07.606307 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4463fec1-8026-4831-9f99-d7b8ba936dc2-utilities\") pod \"redhat-operators-klfxq\" (UID: \"4463fec1-8026-4831-9f99-d7b8ba936dc2\") " pod="openshift-marketplace/redhat-operators-klfxq"
Jan 29 15:51:07 crc kubenswrapper[5008]: I0129 15:51:07.707774 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4463fec1-8026-4831-9f99-d7b8ba936dc2-catalog-content\") pod \"redhat-operators-klfxq\" (UID: \"4463fec1-8026-4831-9f99-d7b8ba936dc2\") " pod="openshift-marketplace/redhat-operators-klfxq"
Jan 29 15:51:07 crc kubenswrapper[5008]: I0129 15:51:07.708184 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2268w\" (UniqueName: \"kubernetes.io/projected/4463fec1-8026-4831-9f99-d7b8ba936dc2-kube-api-access-2268w\") pod \"redhat-operators-klfxq\" (UID: \"4463fec1-8026-4831-9f99-d7b8ba936dc2\") " pod="openshift-marketplace/redhat-operators-klfxq"
Jan 29 15:51:07 crc kubenswrapper[5008]: I0129 15:51:07.708376 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4463fec1-8026-4831-9f99-d7b8ba936dc2-utilities\") pod \"redhat-operators-klfxq\" (UID: \"4463fec1-8026-4831-9f99-d7b8ba936dc2\") " pod="openshift-marketplace/redhat-operators-klfxq"
Jan 29 15:51:07 crc kubenswrapper[5008]: I0129 15:51:07.709134 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4463fec1-8026-4831-9f99-d7b8ba936dc2-utilities\") pod \"redhat-operators-klfxq\" (UID: \"4463fec1-8026-4831-9f99-d7b8ba936dc2\") " pod="openshift-marketplace/redhat-operators-klfxq"
Jan 29 15:51:07 crc kubenswrapper[5008]: I0129 15:51:07.709545 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4463fec1-8026-4831-9f99-d7b8ba936dc2-catalog-content\") pod \"redhat-operators-klfxq\" (UID: \"4463fec1-8026-4831-9f99-d7b8ba936dc2\") " pod="openshift-marketplace/redhat-operators-klfxq"
Jan 29 15:51:07 crc kubenswrapper[5008]: I0129 15:51:07.728632 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2268w\" (UniqueName: \"kubernetes.io/projected/4463fec1-8026-4831-9f99-d7b8ba936dc2-kube-api-access-2268w\") pod \"redhat-operators-klfxq\" (UID: \"4463fec1-8026-4831-9f99-d7b8ba936dc2\") " pod="openshift-marketplace/redhat-operators-klfxq"
Jan 29 15:51:07 crc kubenswrapper[5008]: I0129 15:51:07.792469 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-klfxq"
Jan 29 15:51:08 crc kubenswrapper[5008]: I0129 15:51:08.255645 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-klfxq"]
Jan 29 15:51:08 crc kubenswrapper[5008]: I0129 15:51:08.316400 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klfxq" event={"ID":"4463fec1-8026-4831-9f99-d7b8ba936dc2","Type":"ContainerStarted","Data":"0fc90fe6f216734f5716f0b5ec0a72d9a7f69d6941dabccd64c564046956dd2e"}
Jan 29 15:51:08 crc kubenswrapper[5008]: I0129 15:51:08.322089 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36d8b2f2-f15e-4b9a-a522-35d228919444","Type":"ContainerStarted","Data":"4d4a03cbd90fc3060c9fd69659de0e9b052b60e67708f55848d807bfe4b811fa"}
Jan 29 15:51:10 crc kubenswrapper[5008]: I0129 15:51:10.344524 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klfxq" event={"ID":"4463fec1-8026-4831-9f99-d7b8ba936dc2","Type":"ContainerStarted","Data":"462959e0a3731e52e7f85f03aa4b504ea2a9ab52231f3ec4dbe2d3b003c0cc7b"}
Jan 29 15:51:11 crc kubenswrapper[5008]: I0129 15:51:11.358691 5008 generic.go:334] "Generic (PLEG): container finished" podID="4463fec1-8026-4831-9f99-d7b8ba936dc2" containerID="462959e0a3731e52e7f85f03aa4b504ea2a9ab52231f3ec4dbe2d3b003c0cc7b" exitCode=0
Jan 29 15:51:11 crc kubenswrapper[5008]: I0129 15:51:11.358765 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klfxq" event={"ID":"4463fec1-8026-4831-9f99-d7b8ba936dc2","Type":"ContainerDied","Data":"462959e0a3731e52e7f85f03aa4b504ea2a9ab52231f3ec4dbe2d3b003c0cc7b"}
Jan 29 15:51:12 crc kubenswrapper[5008]: I0129 15:51:12.372218 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klfxq" event={"ID":"4463fec1-8026-4831-9f99-d7b8ba936dc2","Type":"ContainerStarted","Data":"f7b95015db9e59af7b1eeefd93b153512b2e48feff7adc929690e1b45d7dbec2"}
Jan 29 15:51:12 crc kubenswrapper[5008]: I0129 15:51:12.378260 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36d8b2f2-f15e-4b9a-a522-35d228919444","Type":"ContainerStarted","Data":"3c4810319ce99b0ba470d870728a1657c47a7d5b6ecdc21f11ecc35cfa95fa28"}
Jan 29 15:51:12 crc kubenswrapper[5008]: I0129 15:51:12.378574 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="ceilometer-central-agent" containerID="cri-o://333aa71748cf0dbcc8fddbab51dff8ff1acaa47f116a066d74485824ee50dd82" gracePeriod=30
Jan 29 15:51:12 crc kubenswrapper[5008]: I0129 15:51:12.378877 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 29 15:51:12 crc kubenswrapper[5008]: I0129 15:51:12.378943 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="proxy-httpd" containerID="cri-o://3c4810319ce99b0ba470d870728a1657c47a7d5b6ecdc21f11ecc35cfa95fa28" gracePeriod=30
Jan 29 15:51:12 crc kubenswrapper[5008]: I0129 15:51:12.379033 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="sg-core" containerID="cri-o://4d4a03cbd90fc3060c9fd69659de0e9b052b60e67708f55848d807bfe4b811fa" gracePeriod=30
Jan 29 15:51:12 crc kubenswrapper[5008]: I0129 15:51:12.379111 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="ceilometer-notification-agent" containerID="cri-o://35227769b955309ab8713be39a1f2ffd968e6a4bd2b991d2c6531b44270ba0a3" gracePeriod=30
Jan 29 15:51:13 crc kubenswrapper[5008]: I0129 15:51:13.393659 5008 generic.go:334] "Generic (PLEG): container finished" podID="4463fec1-8026-4831-9f99-d7b8ba936dc2" containerID="f7b95015db9e59af7b1eeefd93b153512b2e48feff7adc929690e1b45d7dbec2" exitCode=0
Jan 29 15:51:13 crc kubenswrapper[5008]: I0129 15:51:13.393774 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klfxq" event={"ID":"4463fec1-8026-4831-9f99-d7b8ba936dc2","Type":"ContainerDied","Data":"f7b95015db9e59af7b1eeefd93b153512b2e48feff7adc929690e1b45d7dbec2"}
Jan 29 15:51:13 crc kubenswrapper[5008]: I0129 15:51:13.400563 5008 generic.go:334] "Generic (PLEG): container finished" podID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerID="4d4a03cbd90fc3060c9fd69659de0e9b052b60e67708f55848d807bfe4b811fa" exitCode=2
Jan 29 15:51:13 crc kubenswrapper[5008]: I0129 15:51:13.400608 5008 generic.go:334] "Generic (PLEG): container finished" podID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerID="35227769b955309ab8713be39a1f2ffd968e6a4bd2b991d2c6531b44270ba0a3" exitCode=0
Jan 29 15:51:13 crc kubenswrapper[5008]: I0129 15:51:13.400621 5008 generic.go:334] "Generic (PLEG): container finished" podID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerID="333aa71748cf0dbcc8fddbab51dff8ff1acaa47f116a066d74485824ee50dd82" exitCode=0
Jan 29 15:51:13 crc kubenswrapper[5008]: I0129 15:51:13.400631 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36d8b2f2-f15e-4b9a-a522-35d228919444","Type":"ContainerDied","Data":"4d4a03cbd90fc3060c9fd69659de0e9b052b60e67708f55848d807bfe4b811fa"}
Jan 29 15:51:13 crc kubenswrapper[5008]: I0129 15:51:13.400701 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36d8b2f2-f15e-4b9a-a522-35d228919444","Type":"ContainerDied","Data":"35227769b955309ab8713be39a1f2ffd968e6a4bd2b991d2c6531b44270ba0a3"}
Jan 29 15:51:13 crc kubenswrapper[5008]: I0129 15:51:13.400713 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36d8b2f2-f15e-4b9a-a522-35d228919444","Type":"ContainerDied","Data":"333aa71748cf0dbcc8fddbab51dff8ff1acaa47f116a066d74485824ee50dd82"}
Jan 29 15:51:13 crc kubenswrapper[5008]: I0129 15:51:13.428413 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.515717908 podStartE2EDuration="20.428387087s" podCreationTimestamp="2026-01-29 15:50:53 +0000 UTC" firstStartedPulling="2026-01-29 15:50:54.08120993 +0000 UTC m=+1397.754064167" lastFinishedPulling="2026-01-29 15:51:11.993879109 +0000 UTC m=+1415.666733346" observedRunningTime="2026-01-29 15:51:12.427926528 +0000 UTC m=+1416.100780795" watchObservedRunningTime="2026-01-29 15:51:13.428387087 +0000 UTC m=+1417.101241334"
Jan 29 15:51:14 crc kubenswrapper[5008]: I0129 15:51:14.415002 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klfxq" event={"ID":"4463fec1-8026-4831-9f99-d7b8ba936dc2","Type":"ContainerStarted","Data":"ae6477014f197c0c059afc09d06201a6ab5fe21275e0fd3dbd3b46238154e186"}
Jan 29 15:51:14 crc kubenswrapper[5008]: I0129 15:51:14.444628 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-klfxq" podStartSLOduration=4.847403096 podStartE2EDuration="7.444609737s" podCreationTimestamp="2026-01-29 15:51:07 +0000 UTC" firstStartedPulling="2026-01-29 15:51:11.360301511 +0000 UTC m=+1415.033155768" lastFinishedPulling="2026-01-29 15:51:13.957508172 +0000 UTC m=+1417.630362409" observedRunningTime="2026-01-29 15:51:14.437241008 +0000 UTC m=+1418.110095255" watchObservedRunningTime="2026-01-29 15:51:14.444609737 +0000 UTC m=+1418.117463974"
Jan 29 15:51:17 crc kubenswrapper[5008]: I0129 15:51:17.793340 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-klfxq"
Jan 29 15:51:17 crc kubenswrapper[5008]: I0129 15:51:17.793680 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-klfxq"
Jan 29 15:51:18 crc kubenswrapper[5008]: I0129 15:51:18.843426 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-klfxq" podUID="4463fec1-8026-4831-9f99-d7b8ba936dc2" containerName="registry-server" probeResult="failure" output=<
Jan 29 15:51:18 crc kubenswrapper[5008]: timeout: failed to connect service ":50051" within 1s
Jan 29 15:51:18 crc kubenswrapper[5008]: >
Jan 29 15:51:23 crc kubenswrapper[5008]: I0129 15:51:23.658081 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 29 15:51:27 crc kubenswrapper[5008]: I0129 15:51:27.852287 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-klfxq"
Jan 29 15:51:27 crc kubenswrapper[5008]: I0129 15:51:27.934529 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-klfxq"
Jan 29 15:51:28 crc kubenswrapper[5008]: I0129 15:51:28.647134 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-klfxq"]
Jan 29 15:51:29 crc kubenswrapper[5008]: I0129 15:51:29.561350 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-klfxq" podUID="4463fec1-8026-4831-9f99-d7b8ba936dc2" containerName="registry-server" containerID="cri-o://ae6477014f197c0c059afc09d06201a6ab5fe21275e0fd3dbd3b46238154e186" gracePeriod=2
Jan 29 15:51:30 crc kubenswrapper[5008]: I0129 15:51:30.576217 5008 generic.go:334] "Generic (PLEG): container finished" podID="4463fec1-8026-4831-9f99-d7b8ba936dc2" containerID="ae6477014f197c0c059afc09d06201a6ab5fe21275e0fd3dbd3b46238154e186" exitCode=0
Jan 29 15:51:30 crc kubenswrapper[5008]: I0129 15:51:30.576303 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klfxq" event={"ID":"4463fec1-8026-4831-9f99-d7b8ba936dc2","Type":"ContainerDied","Data":"ae6477014f197c0c059afc09d06201a6ab5fe21275e0fd3dbd3b46238154e186"}
Jan 29 15:51:31 crc kubenswrapper[5008]: I0129 15:51:31.482038 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-klfxq"
Jan 29 15:51:31 crc kubenswrapper[5008]: I0129 15:51:31.576934 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4463fec1-8026-4831-9f99-d7b8ba936dc2-catalog-content\") pod \"4463fec1-8026-4831-9f99-d7b8ba936dc2\" (UID: \"4463fec1-8026-4831-9f99-d7b8ba936dc2\") "
Jan 29 15:51:31 crc kubenswrapper[5008]: I0129 15:51:31.577236 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2268w\" (UniqueName: \"kubernetes.io/projected/4463fec1-8026-4831-9f99-d7b8ba936dc2-kube-api-access-2268w\") pod \"4463fec1-8026-4831-9f99-d7b8ba936dc2\" (UID: \"4463fec1-8026-4831-9f99-d7b8ba936dc2\") "
Jan 29 15:51:31 crc kubenswrapper[5008]: I0129 15:51:31.577273 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4463fec1-8026-4831-9f99-d7b8ba936dc2-utilities\") pod \"4463fec1-8026-4831-9f99-d7b8ba936dc2\" (UID: \"4463fec1-8026-4831-9f99-d7b8ba936dc2\") "
Jan 29 15:51:31 crc kubenswrapper[5008]: I0129 15:51:31.578258 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4463fec1-8026-4831-9f99-d7b8ba936dc2-utilities" (OuterVolumeSpecName: "utilities") pod "4463fec1-8026-4831-9f99-d7b8ba936dc2" (UID: "4463fec1-8026-4831-9f99-d7b8ba936dc2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:51:31 crc kubenswrapper[5008]: I0129 15:51:31.582720 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4463fec1-8026-4831-9f99-d7b8ba936dc2-kube-api-access-2268w" (OuterVolumeSpecName: "kube-api-access-2268w") pod "4463fec1-8026-4831-9f99-d7b8ba936dc2" (UID: "4463fec1-8026-4831-9f99-d7b8ba936dc2"). InnerVolumeSpecName "kube-api-access-2268w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:51:31 crc kubenswrapper[5008]: I0129 15:51:31.588529 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klfxq" event={"ID":"4463fec1-8026-4831-9f99-d7b8ba936dc2","Type":"ContainerDied","Data":"0fc90fe6f216734f5716f0b5ec0a72d9a7f69d6941dabccd64c564046956dd2e"}
Jan 29 15:51:31 crc kubenswrapper[5008]: I0129 15:51:31.588574 5008 scope.go:117] "RemoveContainer" containerID="ae6477014f197c0c059afc09d06201a6ab5fe21275e0fd3dbd3b46238154e186"
Jan 29 15:51:31 crc kubenswrapper[5008]: I0129 15:51:31.588611 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-klfxq"
Jan 29 15:51:31 crc kubenswrapper[5008]: I0129 15:51:31.678701 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2268w\" (UniqueName: \"kubernetes.io/projected/4463fec1-8026-4831-9f99-d7b8ba936dc2-kube-api-access-2268w\") on node \"crc\" DevicePath \"\""
Jan 29 15:51:31 crc kubenswrapper[5008]: I0129 15:51:31.678767 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4463fec1-8026-4831-9f99-d7b8ba936dc2-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 15:51:31 crc kubenswrapper[5008]: I0129 15:51:31.714661 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4463fec1-8026-4831-9f99-d7b8ba936dc2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4463fec1-8026-4831-9f99-d7b8ba936dc2" (UID: "4463fec1-8026-4831-9f99-d7b8ba936dc2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:51:31 crc kubenswrapper[5008]: I0129 15:51:31.780020 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4463fec1-8026-4831-9f99-d7b8ba936dc2-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 15:51:31 crc kubenswrapper[5008]: I0129 15:51:31.925360 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-klfxq"]
Jan 29 15:51:31 crc kubenswrapper[5008]: I0129 15:51:31.939231 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-klfxq"]
Jan 29 15:51:32 crc kubenswrapper[5008]: I0129 15:51:32.518276 5008 scope.go:117] "RemoveContainer" containerID="f7b95015db9e59af7b1eeefd93b153512b2e48feff7adc929690e1b45d7dbec2"
Jan 29 15:51:32 crc kubenswrapper[5008]: I0129 15:51:32.559128 5008 scope.go:117] "RemoveContainer" containerID="462959e0a3731e52e7f85f03aa4b504ea2a9ab52231f3ec4dbe2d3b003c0cc7b"
Jan 29 15:51:33 crc kubenswrapper[5008]: I0129 15:51:33.339706 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4463fec1-8026-4831-9f99-d7b8ba936dc2" path="/var/lib/kubelet/pods/4463fec1-8026-4831-9f99-d7b8ba936dc2/volumes"
Jan 29 15:51:37 crc kubenswrapper[5008]: I0129 15:51:37.657371 5008 generic.go:334] "Generic (PLEG): container finished" podID="00b42485-f42b-4ca6-8e84-1a795454dd9f" containerID="cae76da1b19104ec9ac0d79d4c0c18c044c82a9e0fb4665e780db9f6a9a1f05e" exitCode=0
Jan 29 15:51:37 crc kubenswrapper[5008]: I0129 15:51:37.657539 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9mffk" event={"ID":"00b42485-f42b-4ca6-8e84-1a795454dd9f","Type":"ContainerDied","Data":"cae76da1b19104ec9ac0d79d4c0c18c044c82a9e0fb4665e780db9f6a9a1f05e"}
Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.027695 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9mffk"
Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.207843 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ls57p\" (UniqueName: \"kubernetes.io/projected/00b42485-f42b-4ca6-8e84-1a795454dd9f-kube-api-access-ls57p\") pod \"00b42485-f42b-4ca6-8e84-1a795454dd9f\" (UID: \"00b42485-f42b-4ca6-8e84-1a795454dd9f\") "
Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.208241 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-config-data\") pod \"00b42485-f42b-4ca6-8e84-1a795454dd9f\" (UID: \"00b42485-f42b-4ca6-8e84-1a795454dd9f\") "
Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.208312 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-scripts\") pod \"00b42485-f42b-4ca6-8e84-1a795454dd9f\" (UID: \"00b42485-f42b-4ca6-8e84-1a795454dd9f\") "
Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.208356 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-combined-ca-bundle\") pod \"00b42485-f42b-4ca6-8e84-1a795454dd9f\" (UID: \"00b42485-f42b-4ca6-8e84-1a795454dd9f\") "
Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.213808 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-scripts" (OuterVolumeSpecName: "scripts") pod "00b42485-f42b-4ca6-8e84-1a795454dd9f" (UID: "00b42485-f42b-4ca6-8e84-1a795454dd9f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.216966 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00b42485-f42b-4ca6-8e84-1a795454dd9f-kube-api-access-ls57p" (OuterVolumeSpecName: "kube-api-access-ls57p") pod "00b42485-f42b-4ca6-8e84-1a795454dd9f" (UID: "00b42485-f42b-4ca6-8e84-1a795454dd9f"). InnerVolumeSpecName "kube-api-access-ls57p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.241318 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00b42485-f42b-4ca6-8e84-1a795454dd9f" (UID: "00b42485-f42b-4ca6-8e84-1a795454dd9f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.243886 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-config-data" (OuterVolumeSpecName: "config-data") pod "00b42485-f42b-4ca6-8e84-1a795454dd9f" (UID: "00b42485-f42b-4ca6-8e84-1a795454dd9f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.311129 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ls57p\" (UniqueName: \"kubernetes.io/projected/00b42485-f42b-4ca6-8e84-1a795454dd9f-kube-api-access-ls57p\") on node \"crc\" DevicePath \"\""
Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.311180 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.311194 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.311205 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00b42485-f42b-4ca6-8e84-1a795454dd9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.677473 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9mffk" event={"ID":"00b42485-f42b-4ca6-8e84-1a795454dd9f","Type":"ContainerDied","Data":"9cfdb60cd6bab187b310c7e3b7b9918a771aed98988c83c807016cc578b45171"}
Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.677510 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cfdb60cd6bab187b310c7e3b7b9918a771aed98988c83c807016cc578b45171"
Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.677568 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9mffk"
Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.817357 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 29 15:51:39 crc kubenswrapper[5008]: E0129 15:51:39.817912 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4463fec1-8026-4831-9f99-d7b8ba936dc2" containerName="registry-server"
Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.817943 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="4463fec1-8026-4831-9f99-d7b8ba936dc2" containerName="registry-server"
Jan 29 15:51:39 crc kubenswrapper[5008]: E0129 15:51:39.817974 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4463fec1-8026-4831-9f99-d7b8ba936dc2" containerName="extract-content"
Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.817987 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="4463fec1-8026-4831-9f99-d7b8ba936dc2" containerName="extract-content"
Jan 29 15:51:39 crc kubenswrapper[5008]: E0129 15:51:39.818005 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4463fec1-8026-4831-9f99-d7b8ba936dc2" containerName="extract-utilities"
Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.818017 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="4463fec1-8026-4831-9f99-d7b8ba936dc2" containerName="extract-utilities"
Jan 29 15:51:39 crc kubenswrapper[5008]: E0129 15:51:39.818072 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00b42485-f42b-4ca6-8e84-1a795454dd9f" containerName="nova-cell0-conductor-db-sync"
Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.818086 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="00b42485-f42b-4ca6-8e84-1a795454dd9f" containerName="nova-cell0-conductor-db-sync"
Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.818389 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="00b42485-f42b-4ca6-8e84-1a795454dd9f" containerName="nova-cell0-conductor-db-sync"
Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.818415 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="4463fec1-8026-4831-9f99-d7b8ba936dc2" containerName="registry-server"
Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.819322 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.826058 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.826093 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-s4fbc"
Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.830849 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.921891 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc7804a1-e957-4095-b882-901a403bce11-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"fc7804a1-e957-4095-b882-901a403bce11\") " pod="openstack/nova-cell0-conductor-0"
Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.921990 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc7804a1-e957-4095-b882-901a403bce11-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"fc7804a1-e957-4095-b882-901a403bce11\") " pod="openstack/nova-cell0-conductor-0"
Jan 29 15:51:39 crc kubenswrapper[5008]: I0129 15:51:39.922316 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btcgl\" (UniqueName: \"kubernetes.io/projected/fc7804a1-e957-4095-b882-901a403bce11-kube-api-access-btcgl\") pod \"nova-cell0-conductor-0\" (UID: \"fc7804a1-e957-4095-b882-901a403bce11\") " pod="openstack/nova-cell0-conductor-0"
Jan 29 15:51:40 crc kubenswrapper[5008]: I0129 15:51:40.024710 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc7804a1-e957-4095-b882-901a403bce11-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"fc7804a1-e957-4095-b882-901a403bce11\") " pod="openstack/nova-cell0-conductor-0"
Jan 29 15:51:40 crc kubenswrapper[5008]: I0129 15:51:40.025064 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc7804a1-e957-4095-b882-901a403bce11-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"fc7804a1-e957-4095-b882-901a403bce11\") " pod="openstack/nova-cell0-conductor-0"
Jan 29 15:51:40 crc kubenswrapper[5008]: I0129 15:51:40.025930 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btcgl\" (UniqueName: \"kubernetes.io/projected/fc7804a1-e957-4095-b882-901a403bce11-kube-api-access-btcgl\") pod \"nova-cell0-conductor-0\" (UID: \"fc7804a1-e957-4095-b882-901a403bce11\") " pod="openstack/nova-cell0-conductor-0"
Jan 29 15:51:40 crc kubenswrapper[5008]: I0129 15:51:40.028809 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc7804a1-e957-4095-b882-901a403bce11-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"fc7804a1-e957-4095-b882-901a403bce11\") " pod="openstack/nova-cell0-conductor-0"
Jan 29 15:51:40 crc kubenswrapper[5008]: I0129 15:51:40.028890 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc7804a1-e957-4095-b882-901a403bce11-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"fc7804a1-e957-4095-b882-901a403bce11\") " pod="openstack/nova-cell0-conductor-0"
Jan 29 15:51:40 crc kubenswrapper[5008]: I0129 15:51:40.041140 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btcgl\" (UniqueName: \"kubernetes.io/projected/fc7804a1-e957-4095-b882-901a403bce11-kube-api-access-btcgl\") pod \"nova-cell0-conductor-0\" (UID: \"fc7804a1-e957-4095-b882-901a403bce11\") " pod="openstack/nova-cell0-conductor-0"
Jan 29 15:51:40 crc kubenswrapper[5008]: I0129 15:51:40.145256 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 29 15:51:40 crc kubenswrapper[5008]: I0129 15:51:40.606485 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 29 15:51:40 crc kubenswrapper[5008]: I0129 15:51:40.686685 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"fc7804a1-e957-4095-b882-901a403bce11","Type":"ContainerStarted","Data":"4d79723cffb908add611004361bb98aa1374424b0a267ad0392e0fad3299d496"}
Jan 29 15:51:41 crc kubenswrapper[5008]: I0129 15:51:41.700702 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"fc7804a1-e957-4095-b882-901a403bce11","Type":"ContainerStarted","Data":"5819b755290290b2c26f61417a55999054b0b315e48cf27e3ed3f924cc962e36"}
Jan 29 15:51:41 crc kubenswrapper[5008]: I0129 15:51:41.700972 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Jan 29 15:51:41 crc kubenswrapper[5008]: I0129 15:51:41.743287 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.743259352 podStartE2EDuration="2.743259352s" podCreationTimestamp="2026-01-29 15:51:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:51:41.72790695 +0000 UTC m=+1445.400761267" watchObservedRunningTime="2026-01-29 15:51:41.743259352 +0000 UTC m=+1445.416113629"
Jan 29 15:51:42 crc kubenswrapper[5008]: E0129 15:51:42.644114 5008 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00b42485_f42b_4ca6_8e84_1a795454dd9f.slice/crio-9cfdb60cd6bab187b310c7e3b7b9918a771aed98988c83c807016cc578b45171\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36d8b2f2_f15e_4b9a_a522_35d228919444.slice/crio-conmon-3c4810319ce99b0ba470d870728a1657c47a7d5b6ecdc21f11ecc35cfa95fa28.scope\": RecentStats: unable to find data in memory cache]"
Jan 29 15:51:42 crc kubenswrapper[5008]: I0129 15:51:42.713020 5008 generic.go:334] "Generic (PLEG): container finished" podID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerID="3c4810319ce99b0ba470d870728a1657c47a7d5b6ecdc21f11ecc35cfa95fa28" exitCode=137
Jan 29 15:51:42 crc kubenswrapper[5008]: I0129 15:51:42.713061 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36d8b2f2-f15e-4b9a-a522-35d228919444","Type":"ContainerDied","Data":"3c4810319ce99b0ba470d870728a1657c47a7d5b6ecdc21f11ecc35cfa95fa28"}
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.275386 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.388422 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36d8b2f2-f15e-4b9a-a522-35d228919444-run-httpd\") pod \"36d8b2f2-f15e-4b9a-a522-35d228919444\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") "
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.388854 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-config-data\") pod \"36d8b2f2-f15e-4b9a-a522-35d228919444\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") "
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.388974 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58fcv\" (UniqueName: \"kubernetes.io/projected/36d8b2f2-f15e-4b9a-a522-35d228919444-kube-api-access-58fcv\") pod \"36d8b2f2-f15e-4b9a-a522-35d228919444\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") "
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.389015 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36d8b2f2-f15e-4b9a-a522-35d228919444-log-httpd\") pod \"36d8b2f2-f15e-4b9a-a522-35d228919444\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") "
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.389061 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36d8b2f2-f15e-4b9a-a522-35d228919444-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "36d8b2f2-f15e-4b9a-a522-35d228919444" (UID: "36d8b2f2-f15e-4b9a-a522-35d228919444"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.389137 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-combined-ca-bundle\") pod \"36d8b2f2-f15e-4b9a-a522-35d228919444\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") "
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.389167 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-scripts\") pod \"36d8b2f2-f15e-4b9a-a522-35d228919444\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") "
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.389280 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-sg-core-conf-yaml\") pod \"36d8b2f2-f15e-4b9a-a522-35d228919444\" (UID: \"36d8b2f2-f15e-4b9a-a522-35d228919444\") "
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.389512 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36d8b2f2-f15e-4b9a-a522-35d228919444-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "36d8b2f2-f15e-4b9a-a522-35d228919444" (UID: "36d8b2f2-f15e-4b9a-a522-35d228919444"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.389794 5008 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36d8b2f2-f15e-4b9a-a522-35d228919444-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.389816 5008 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36d8b2f2-f15e-4b9a-a522-35d228919444-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.395518 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36d8b2f2-f15e-4b9a-a522-35d228919444-kube-api-access-58fcv" (OuterVolumeSpecName: "kube-api-access-58fcv") pod "36d8b2f2-f15e-4b9a-a522-35d228919444" (UID: "36d8b2f2-f15e-4b9a-a522-35d228919444"). InnerVolumeSpecName "kube-api-access-58fcv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.410495 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-scripts" (OuterVolumeSpecName: "scripts") pod "36d8b2f2-f15e-4b9a-a522-35d228919444" (UID: "36d8b2f2-f15e-4b9a-a522-35d228919444"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.416387 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "36d8b2f2-f15e-4b9a-a522-35d228919444" (UID: "36d8b2f2-f15e-4b9a-a522-35d228919444"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.507236 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.507266 5008 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.507277 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58fcv\" (UniqueName: \"kubernetes.io/projected/36d8b2f2-f15e-4b9a-a522-35d228919444-kube-api-access-58fcv\") on node \"crc\" DevicePath \"\""
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.513998 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36d8b2f2-f15e-4b9a-a522-35d228919444" (UID: "36d8b2f2-f15e-4b9a-a522-35d228919444"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.535165 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-config-data" (OuterVolumeSpecName: "config-data") pod "36d8b2f2-f15e-4b9a-a522-35d228919444" (UID: "36d8b2f2-f15e-4b9a-a522-35d228919444"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.609585 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.609640 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d8b2f2-f15e-4b9a-a522-35d228919444-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.728744 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36d8b2f2-f15e-4b9a-a522-35d228919444","Type":"ContainerDied","Data":"1da072cac2f699d8a48c0ad66c9f3278b5a7e28c720fc4e5dc3e5e7db0670e0e"}
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.728824 5008 scope.go:117] "RemoveContainer" containerID="3c4810319ce99b0ba470d870728a1657c47a7d5b6ecdc21f11ecc35cfa95fa28"
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.728967 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.751330 5008 scope.go:117] "RemoveContainer" containerID="4d4a03cbd90fc3060c9fd69659de0e9b052b60e67708f55848d807bfe4b811fa"
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.773459 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.778626 5008 scope.go:117] "RemoveContainer" containerID="35227769b955309ab8713be39a1f2ffd968e6a4bd2b991d2c6531b44270ba0a3"
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.786459 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.816820 5008 scope.go:117] "RemoveContainer" containerID="333aa71748cf0dbcc8fddbab51dff8ff1acaa47f116a066d74485824ee50dd82"
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.822605 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 29 15:51:43 crc kubenswrapper[5008]: E0129 15:51:43.823130 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="ceilometer-central-agent"
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.823158 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="ceilometer-central-agent"
Jan 29 15:51:43 crc kubenswrapper[5008]: E0129 15:51:43.823179 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="sg-core"
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.823189 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="sg-core"
Jan 29 15:51:43 crc kubenswrapper[5008]: E0129 15:51:43.823207 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="proxy-httpd"
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.823218 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="proxy-httpd"
Jan 29 15:51:43 crc kubenswrapper[5008]: E0129 15:51:43.823242 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="ceilometer-notification-agent"
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.823252 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="ceilometer-notification-agent"
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.823562 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="ceilometer-notification-agent"
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.823592 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="sg-core"
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.823609 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="proxy-httpd"
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.823624 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" containerName="ceilometer-central-agent"
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.826459 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.829800 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.829965 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.832497 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.914977 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0"
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.915220 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-config-data\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0"
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.915428 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-run-httpd\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0"
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.915551 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-scripts\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0"
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.915630 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-log-httpd\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0"
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.915675 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0"
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.915710 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vwdz\" (UniqueName: \"kubernetes.io/projected/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-kube-api-access-5vwdz\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0"
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.990887 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 15:51:43 crc kubenswrapper[5008]: I0129 15:51:43.990973 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 15:51:44 crc kubenswrapper[5008]: I0129 15:51:44.018028 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-scripts\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0"
Jan 29 15:51:44 crc kubenswrapper[5008]: I0129 15:51:44.018150 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-log-httpd\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0"
Jan 29 15:51:44 crc kubenswrapper[5008]: I0129 15:51:44.018179 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0"
Jan 29 15:51:44 crc kubenswrapper[5008]: I0129 15:51:44.018214 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vwdz\" (UniqueName: \"kubernetes.io/projected/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-kube-api-access-5vwdz\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0"
Jan 29 15:51:44 crc kubenswrapper[5008]: I0129 15:51:44.018274 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0"
Jan 29 15:51:44 crc kubenswrapper[5008]: I0129 15:51:44.018378 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-config-data\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0"
Jan 29 15:51:44 crc kubenswrapper[5008]: I0129 15:51:44.019078 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-run-httpd\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0"
Jan 29 15:51:44 crc kubenswrapper[5008]: I0129 15:51:44.019454 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-run-httpd\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0"
Jan 29 15:51:44 crc kubenswrapper[5008]: I0129 15:51:44.020384 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-log-httpd\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0"
Jan 29 15:51:44 crc
kubenswrapper[5008]: I0129 15:51:44.026741 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-scripts\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0" Jan 29 15:51:44 crc kubenswrapper[5008]: I0129 15:51:44.027014 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0" Jan 29 15:51:44 crc kubenswrapper[5008]: I0129 15:51:44.027747 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0" Jan 29 15:51:44 crc kubenswrapper[5008]: I0129 15:51:44.034922 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-config-data\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0" Jan 29 15:51:44 crc kubenswrapper[5008]: I0129 15:51:44.047237 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vwdz\" (UniqueName: \"kubernetes.io/projected/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-kube-api-access-5vwdz\") pod \"ceilometer-0\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " pod="openstack/ceilometer-0" Jan 29 15:51:44 crc kubenswrapper[5008]: I0129 15:51:44.154071 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:51:44 crc kubenswrapper[5008]: I0129 15:51:44.569420 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:51:44 crc kubenswrapper[5008]: I0129 15:51:44.738832 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7","Type":"ContainerStarted","Data":"0c880a32127e0f9cf20872f0cb9c9103c1ec0fcb4e31857d57145ee7e6ef5eff"} Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.178947 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.336858 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36d8b2f2-f15e-4b9a-a522-35d228919444" path="/var/lib/kubelet/pods/36d8b2f2-f15e-4b9a-a522-35d228919444/volumes" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.688655 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-2crqc"] Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.689874 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2crqc" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.692274 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.692830 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.701942 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-2crqc"] Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.751267 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7","Type":"ContainerStarted","Data":"1f0cac0f22132fbe8eb8ceb4b6f38d3eb51e2e56dc4d95059f929e668ed362f6"} Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.857594 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-scripts\") pod \"nova-cell0-cell-mapping-2crqc\" (UID: \"eef9ab07-3037-4115-bb8e-954191b169af\") " pod="openstack/nova-cell0-cell-mapping-2crqc" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.857710 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2crqc\" (UID: \"eef9ab07-3037-4115-bb8e-954191b169af\") " pod="openstack/nova-cell0-cell-mapping-2crqc" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.857838 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsxq7\" (UniqueName: \"kubernetes.io/projected/eef9ab07-3037-4115-bb8e-954191b169af-kube-api-access-zsxq7\") pod \"nova-cell0-cell-mapping-2crqc\" (UID: \"eef9ab07-3037-4115-bb8e-954191b169af\") " pod="openstack/nova-cell0-cell-mapping-2crqc" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.857881 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-config-data\") pod \"nova-cell0-cell-mapping-2crqc\" (UID: \"eef9ab07-3037-4115-bb8e-954191b169af\") " pod="openstack/nova-cell0-cell-mapping-2crqc" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.886048 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.887118 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.889951 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.912528 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.950775 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.952693 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.959117 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-config-data\") pod \"nova-cell0-cell-mapping-2crqc\" (UID: \"eef9ab07-3037-4115-bb8e-954191b169af\") " pod="openstack/nova-cell0-cell-mapping-2crqc" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.959235 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-scripts\") pod \"nova-cell0-cell-mapping-2crqc\" (UID: \"eef9ab07-3037-4115-bb8e-954191b169af\") " pod="openstack/nova-cell0-cell-mapping-2crqc" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.959307 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2crqc\" (UID: \"eef9ab07-3037-4115-bb8e-954191b169af\") " pod="openstack/nova-cell0-cell-mapping-2crqc" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.959401 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsxq7\" (UniqueName: \"kubernetes.io/projected/eef9ab07-3037-4115-bb8e-954191b169af-kube-api-access-zsxq7\") pod \"nova-cell0-cell-mapping-2crqc\" (UID: \"eef9ab07-3037-4115-bb8e-954191b169af\") " pod="openstack/nova-cell0-cell-mapping-2crqc" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.977509 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.979123 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.981547 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-scripts\") pod \"nova-cell0-cell-mapping-2crqc\" (UID: \"eef9ab07-3037-4115-bb8e-954191b169af\") " pod="openstack/nova-cell0-cell-mapping-2crqc" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.981581 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2crqc\" (UID: \"eef9ab07-3037-4115-bb8e-954191b169af\") " pod="openstack/nova-cell0-cell-mapping-2crqc" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.982248 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-config-data\") pod \"nova-cell0-cell-mapping-2crqc\" (UID: \"eef9ab07-3037-4115-bb8e-954191b169af\") " pod="openstack/nova-cell0-cell-mapping-2crqc" Jan 29 15:51:45 crc kubenswrapper[5008]: I0129 15:51:45.991896 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsxq7\" (UniqueName: \"kubernetes.io/projected/eef9ab07-3037-4115-bb8e-954191b169af-kube-api-access-zsxq7\") pod \"nova-cell0-cell-mapping-2crqc\" (UID: \"eef9ab07-3037-4115-bb8e-954191b169af\") " pod="openstack/nova-cell0-cell-mapping-2crqc" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.053849 5008 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2crqc" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.060746 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4pfj\" (UniqueName: \"kubernetes.io/projected/1f0bf87f-118b-4ad5-8354-688ae93d75e8-kube-api-access-h4pfj\") pod \"nova-scheduler-0\" (UID: \"1f0bf87f-118b-4ad5-8354-688ae93d75e8\") " pod="openstack/nova-scheduler-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.060846 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6cf4\" (UniqueName: \"kubernetes.io/projected/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-kube-api-access-f6cf4\") pod \"nova-api-0\" (UID: \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\") " pod="openstack/nova-api-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.060876 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f0bf87f-118b-4ad5-8354-688ae93d75e8-config-data\") pod \"nova-scheduler-0\" (UID: \"1f0bf87f-118b-4ad5-8354-688ae93d75e8\") " pod="openstack/nova-scheduler-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.060898 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-logs\") pod \"nova-api-0\" (UID: \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\") " pod="openstack/nova-api-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.060962 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\") " pod="openstack/nova-api-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.060979 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-config-data\") pod \"nova-api-0\" (UID: \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\") " pod="openstack/nova-api-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.061010 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f0bf87f-118b-4ad5-8354-688ae93d75e8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1f0bf87f-118b-4ad5-8354-688ae93d75e8\") " pod="openstack/nova-scheduler-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.118853 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.120436 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.125503 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.158563 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.163815 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6cf4\" (UniqueName: \"kubernetes.io/projected/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-kube-api-access-f6cf4\") pod \"nova-api-0\" (UID: \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\") " pod="openstack/nova-api-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.163868 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f0bf87f-118b-4ad5-8354-688ae93d75e8-config-data\") pod \"nova-scheduler-0\" (UID: \"1f0bf87f-118b-4ad5-8354-688ae93d75e8\") " pod="openstack/nova-scheduler-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.163892 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-logs\") pod \"nova-api-0\" (UID: \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\") " pod="openstack/nova-api-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.163955 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\") " pod="openstack/nova-api-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.163971 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-config-data\") pod \"nova-api-0\" (UID: \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\") " pod="openstack/nova-api-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.163999 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f0bf87f-118b-4ad5-8354-688ae93d75e8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1f0bf87f-118b-4ad5-8354-688ae93d75e8\") " pod="openstack/nova-scheduler-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.164068 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4pfj\" (UniqueName: \"kubernetes.io/projected/1f0bf87f-118b-4ad5-8354-688ae93d75e8-kube-api-access-h4pfj\") pod \"nova-scheduler-0\" (UID: \"1f0bf87f-118b-4ad5-8354-688ae93d75e8\") " pod="openstack/nova-scheduler-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.183367 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-logs\") pod \"nova-api-0\" (UID: \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\") " pod="openstack/nova-api-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.185439 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.194303 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.195002 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f0bf87f-118b-4ad5-8354-688ae93d75e8-config-data\") pod \"nova-scheduler-0\" (UID: \"1f0bf87f-118b-4ad5-8354-688ae93d75e8\") " pod="openstack/nova-scheduler-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.195565 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\") " pod="openstack/nova-api-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.201173 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f0bf87f-118b-4ad5-8354-688ae93d75e8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1f0bf87f-118b-4ad5-8354-688ae93d75e8\") " pod="openstack/nova-scheduler-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.207086 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-config-data\") pod \"nova-api-0\" (UID: \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\") " pod="openstack/nova-api-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.213601 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.232393 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4pfj\" (UniqueName: \"kubernetes.io/projected/1f0bf87f-118b-4ad5-8354-688ae93d75e8-kube-api-access-h4pfj\") pod \"nova-scheduler-0\" (UID: \"1f0bf87f-118b-4ad5-8354-688ae93d75e8\") " pod="openstack/nova-scheduler-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.240418 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6cf4\" (UniqueName: \"kubernetes.io/projected/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-kube-api-access-f6cf4\") pod \"nova-api-0\" (UID: \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\") " pod="openstack/nova-api-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.257212 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.268430 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-logs\") pod \"nova-metadata-0\" (UID: \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\") " pod="openstack/nova-metadata-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.268705 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-config-data\") pod \"nova-metadata-0\" (UID: \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\") " pod="openstack/nova-metadata-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.269043 5008 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zl2n\" (UniqueName: \"kubernetes.io/projected/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-kube-api-access-2zl2n\") pod \"nova-metadata-0\" (UID: \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\") " pod="openstack/nova-metadata-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.269187 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\") " pod="openstack/nova-metadata-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.287843 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-xx5z4"] Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.291131 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.299682 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-xx5z4"] Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.372729 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-config-data\") pod \"nova-metadata-0\" (UID: \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\") " pod="openstack/nova-metadata-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.372791 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"13fcb7f1-5a0f-427b-a4a4-709553d1c88d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.372823 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zl2n\" (UniqueName: \"kubernetes.io/projected/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-kube-api-access-2zl2n\") pod \"nova-metadata-0\" (UID: \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\") " pod="openstack/nova-metadata-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.372849 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\") " pod="openstack/nova-metadata-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.372882 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbfkv\" (UniqueName: \"kubernetes.io/projected/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-kube-api-access-cbfkv\") pod \"nova-cell1-novncproxy-0\" (UID: \"13fcb7f1-5a0f-427b-a4a4-709553d1c88d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.372924 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-logs\") pod \"nova-metadata-0\" (UID: \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\") " pod="openstack/nova-metadata-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.373014 5008 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"13fcb7f1-5a0f-427b-a4a4-709553d1c88d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.376146 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-config-data\") pod \"nova-metadata-0\" (UID: \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\") " pod="openstack/nova-metadata-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.376608 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-logs\") pod \"nova-metadata-0\" (UID: \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\") " pod="openstack/nova-metadata-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.380176 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\") " pod="openstack/nova-metadata-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.407500 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zl2n\" (UniqueName: \"kubernetes.io/projected/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-kube-api-access-2zl2n\") pod \"nova-metadata-0\" (UID: \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\") " pod="openstack/nova-metadata-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.474180 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-dns-svc\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.474239 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbfkv\" (UniqueName: \"kubernetes.io/projected/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-kube-api-access-cbfkv\") pod \"nova-cell1-novncproxy-0\" (UID: \"13fcb7f1-5a0f-427b-a4a4-709553d1c88d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.474260 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.474295 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-config\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.474319 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.474396 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2ns2\" (UniqueName: \"kubernetes.io/projected/65ae154d-9b35-408c-bcdb-8b9601be71c8-kube-api-access-c2ns2\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.474457 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.474481 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"13fcb7f1-5a0f-427b-a4a4-709553d1c88d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.474521 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"13fcb7f1-5a0f-427b-a4a4-709553d1c88d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.481446 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"13fcb7f1-5a0f-427b-a4a4-709553d1c88d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.489201 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"13fcb7f1-5a0f-427b-a4a4-709553d1c88d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.491020 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbfkv\" (UniqueName: \"kubernetes.io/projected/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-kube-api-access-cbfkv\") pod \"nova-cell1-novncproxy-0\" (UID: \"13fcb7f1-5a0f-427b-a4a4-709553d1c88d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.495991 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.508250 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.580920 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.581197 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.581925 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-config\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.581969 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.582081 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2ns2\" (UniqueName: \"kubernetes.io/projected/65ae154d-9b35-408c-bcdb-8b9601be71c8-kube-api-access-c2ns2\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.582116 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.582126 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.582263 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-dns-svc\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.582745 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.583033 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-dns-svc\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 
15:51:46.583615 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.584085 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-config\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.588652 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.621078 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2ns2\" (UniqueName: \"kubernetes.io/projected/65ae154d-9b35-408c-bcdb-8b9601be71c8-kube-api-access-c2ns2\") pod \"dnsmasq-dns-bccf8f775-xx5z4\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.624611 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.697582 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-2crqc"] Jan 29 15:51:46 crc kubenswrapper[5008]: W0129 15:51:46.772316 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeef9ab07_3037_4115_bb8e_954191b169af.slice/crio-8469f70a82067f3e5e3ddeda22384487ef8ddf5579da62e05ab8aad6137879e6 WatchSource:0}: Error finding container 8469f70a82067f3e5e3ddeda22384487ef8ddf5579da62e05ab8aad6137879e6: Status 404 returned error can't find the container with id 8469f70a82067f3e5e3ddeda22384487ef8ddf5579da62e05ab8aad6137879e6 Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.796291 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7","Type":"ContainerStarted","Data":"816da0ccd258b96ae016602b4eb20317eab184c219bbd3b28be883eb79a29a14"} Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.919922 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-k5vpb"] Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.921439 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-k5vpb" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.923813 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.924672 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 29 15:51:46 crc kubenswrapper[5008]: I0129 15:51:46.932957 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-k5vpb"] Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.010130 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.082415 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.095624 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84qhc\" (UniqueName: \"kubernetes.io/projected/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-kube-api-access-84qhc\") pod \"nova-cell1-conductor-db-sync-k5vpb\" (UID: \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\") " pod="openstack/nova-cell1-conductor-db-sync-k5vpb" Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.095678 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-config-data\") pod \"nova-cell1-conductor-db-sync-k5vpb\" (UID: \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\") " pod="openstack/nova-cell1-conductor-db-sync-k5vpb" Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.095731 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-k5vpb\" (UID: \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\") " pod="openstack/nova-cell1-conductor-db-sync-k5vpb" Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.095769 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-scripts\") pod \"nova-cell1-conductor-db-sync-k5vpb\" (UID: \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\") " pod="openstack/nova-cell1-conductor-db-sync-k5vpb" Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.197768 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84qhc\" (UniqueName: \"kubernetes.io/projected/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-kube-api-access-84qhc\") pod \"nova-cell1-conductor-db-sync-k5vpb\" (UID: \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\") " pod="openstack/nova-cell1-conductor-db-sync-k5vpb" Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.198065 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-config-data\") pod \"nova-cell1-conductor-db-sync-k5vpb\" (UID: \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\") " pod="openstack/nova-cell1-conductor-db-sync-k5vpb" Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.198091 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-k5vpb\" (UID: \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\") " pod="openstack/nova-cell1-conductor-db-sync-k5vpb" Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.198130 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-scripts\") pod \"nova-cell1-conductor-db-sync-k5vpb\" (UID: \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\") " pod="openstack/nova-cell1-conductor-db-sync-k5vpb" Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.205522 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-k5vpb\" (UID: \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\") " pod="openstack/nova-cell1-conductor-db-sync-k5vpb" Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.206685 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-scripts\") pod \"nova-cell1-conductor-db-sync-k5vpb\" (UID: \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\") " pod="openstack/nova-cell1-conductor-db-sync-k5vpb" Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.218039 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-config-data\") pod \"nova-cell1-conductor-db-sync-k5vpb\" (UID: \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\") " pod="openstack/nova-cell1-conductor-db-sync-k5vpb" Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.220081 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84qhc\" (UniqueName: \"kubernetes.io/projected/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-kube-api-access-84qhc\") pod \"nova-cell1-conductor-db-sync-k5vpb\" (UID: \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\") " pod="openstack/nova-cell1-conductor-db-sync-k5vpb" Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.226497 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.268232 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-k5vpb" Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.438599 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.448515 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-xx5z4"] Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.784090 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-k5vpb"] Jan 29 15:51:47 crc kubenswrapper[5008]: W0129 15:51:47.792903 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0d0cf25_1253_4f34_91a0_c4381d2e8a3f.slice/crio-028242919e3f4265fc6386d321897f9b93da1293777fa8227ed9be3c5ccefdec WatchSource:0}: Error finding container 028242919e3f4265fc6386d321897f9b93da1293777fa8227ed9be3c5ccefdec: Status 404 returned error can't find the container with id 028242919e3f4265fc6386d321897f9b93da1293777fa8227ed9be3c5ccefdec Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.824038 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1f0bf87f-118b-4ad5-8354-688ae93d75e8","Type":"ContainerStarted","Data":"868f4c6e442b8edd70fd72637691064134ed05f40e47973b7eb3e61bb8292d33"} Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.828127 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2crqc" event={"ID":"eef9ab07-3037-4115-bb8e-954191b169af","Type":"ContainerStarted","Data":"89a0838edd76e8e3384f319feeb4aa997d5c03e52a3680d202106547bff689f7"} Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.828176 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2crqc" event={"ID":"eef9ab07-3037-4115-bb8e-954191b169af","Type":"ContainerStarted","Data":"8469f70a82067f3e5e3ddeda22384487ef8ddf5579da62e05ab8aad6137879e6"} Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.831301 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c","Type":"ContainerStarted","Data":"0fa105059117f2b4c51f1c17146bba198c1ad14ed2d53794274c62ac38095b80"} Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.833639 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-k5vpb" event={"ID":"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f","Type":"ContainerStarted","Data":"028242919e3f4265fc6386d321897f9b93da1293777fa8227ed9be3c5ccefdec"} Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.835292 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"13fcb7f1-5a0f-427b-a4a4-709553d1c88d","Type":"ContainerStarted","Data":"89acbc3b89babecb84402f3ec55311a2ac1633dd886e5581dfb789b75a401ac3"} Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.844016 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" event={"ID":"65ae154d-9b35-408c-bcdb-8b9601be71c8","Type":"ContainerStarted","Data":"30bedbc0bc93f8ca5f3511d1081097f8182d9fc6d6457e41dfa4a6a23655328a"} Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.853935 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0","Type":"ContainerStarted","Data":"9a2f240b5615f7e4a96ac0c8a498b92dc644dc9f81d75537df39e3b9f01f9020"} Jan 29 15:51:47 crc kubenswrapper[5008]: I0129 15:51:47.864414 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-2crqc" podStartSLOduration=2.864392962 podStartE2EDuration="2.864392962s" podCreationTimestamp="2026-01-29 15:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:51:47.858485999 +0000 UTC m=+1451.531340246" watchObservedRunningTime="2026-01-29 15:51:47.864392962 +0000 UTC m=+1451.537247199" Jan 29 15:51:48 crc kubenswrapper[5008]: I0129 15:51:48.881318 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-k5vpb" event={"ID":"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f","Type":"ContainerStarted","Data":"36c4369212a2c18b6f334f104822d0182e207e44849984ff3689c410393720c8"} Jan 29 15:51:48 crc kubenswrapper[5008]: I0129 15:51:48.886801 5008 generic.go:334] "Generic (PLEG): container finished" podID="65ae154d-9b35-408c-bcdb-8b9601be71c8" containerID="60289a7b443137e8ea46321b53a131c528f20b282f9018e51ed60f8d48fdfbaa" exitCode=0 Jan 29 15:51:48 crc kubenswrapper[5008]: I0129 15:51:48.887089 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" event={"ID":"65ae154d-9b35-408c-bcdb-8b9601be71c8","Type":"ContainerDied","Data":"60289a7b443137e8ea46321b53a131c528f20b282f9018e51ed60f8d48fdfbaa"} Jan 29 15:51:48 crc kubenswrapper[5008]: I0129 15:51:48.914325 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-k5vpb" podStartSLOduration=2.91430617 podStartE2EDuration="2.91430617s" podCreationTimestamp="2026-01-29 15:51:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:51:48.910263662 +0000 UTC m=+1452.583117909" watchObservedRunningTime="2026-01-29 15:51:48.91430617 +0000 UTC m=+1452.587160407" Jan 29 15:51:49 crc kubenswrapper[5008]: I0129 15:51:49.623834 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 15:51:49 crc kubenswrapper[5008]: I0129 15:51:49.632149 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:51:52 crc kubenswrapper[5008]: E0129 15:51:52.930288 5008 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00b42485_f42b_4ca6_8e84_1a795454dd9f.slice/crio-9cfdb60cd6bab187b310c7e3b7b9918a771aed98988c83c807016cc578b45171\": RecentStats: unable to find data in memory cache]" Jan 29 15:51:53 crc kubenswrapper[5008]: E0129 15:51:53.416870 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 29 15:51:53 crc kubenswrapper[5008]: E0129 15:51:53.417579 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5vwdz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:51:53 crc kubenswrapper[5008]: E0129 15:51:53.421345 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" Jan 29 15:51:53 crc kubenswrapper[5008]: I0129 15:51:53.937488 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0","Type":"ContainerStarted","Data":"3ff835b6bd219620556e6fd30136d1a1bc1bed3536d7cb9120523837f6c21c9e"} Jan 29 15:51:53 crc kubenswrapper[5008]: 
I0129 15:51:53.937543 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0","Type":"ContainerStarted","Data":"445f7efc7b26cc5d17d632da559c671e24b9c9c10f2a6700aafd3aa57f34f1c0"} Jan 29 15:51:53 crc kubenswrapper[5008]: I0129 15:51:53.937594 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a768e5ff-0521-4ad2-aa02-6774dcb5cdd0" containerName="nova-metadata-log" containerID="cri-o://445f7efc7b26cc5d17d632da559c671e24b9c9c10f2a6700aafd3aa57f34f1c0" gracePeriod=30 Jan 29 15:51:53 crc kubenswrapper[5008]: I0129 15:51:53.937611 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a768e5ff-0521-4ad2-aa02-6774dcb5cdd0" containerName="nova-metadata-metadata" containerID="cri-o://3ff835b6bd219620556e6fd30136d1a1bc1bed3536d7cb9120523837f6c21c9e" gracePeriod=30 Jan 29 15:51:53 crc kubenswrapper[5008]: I0129 15:51:53.940646 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1f0bf87f-118b-4ad5-8354-688ae93d75e8","Type":"ContainerStarted","Data":"a45eabdd3a916892c15bd4c53b9c5d38521f3313283444317d3b41cb672cda50"} Jan 29 15:51:53 crc kubenswrapper[5008]: I0129 15:51:53.945270 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c","Type":"ContainerStarted","Data":"6577ef7af46ac87bbeb2eb62d4d6f390b86ce894a2b7eb71d0570cec11f0f60f"} Jan 29 15:51:53 crc kubenswrapper[5008]: I0129 15:51:53.945335 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c","Type":"ContainerStarted","Data":"2d137f6ab32493e4c84e12dddea0af4d07130b45d33ad383089e874020edd1c9"} Jan 29 15:51:53 crc kubenswrapper[5008]: I0129 15:51:53.947529 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"13fcb7f1-5a0f-427b-a4a4-709553d1c88d","Type":"ContainerStarted","Data":"85b97eeb8fe553ff723bb92561ee6bde7c6975de4cf810b074233430e415f498"} Jan 29 15:51:53 crc kubenswrapper[5008]: I0129 15:51:53.947556 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="13fcb7f1-5a0f-427b-a4a4-709553d1c88d" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://85b97eeb8fe553ff723bb92561ee6bde7c6975de4cf810b074233430e415f498" gracePeriod=30 Jan 29 15:51:53 crc kubenswrapper[5008]: I0129 15:51:53.949678 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7","Type":"ContainerStarted","Data":"b479429d051c9958a13fa2ef70a2c32999364b6d9f8db133530497550bd940a4"} Jan 29 15:51:53 crc kubenswrapper[5008]: E0129 15:51:53.952170 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" Jan 29 15:51:53 crc kubenswrapper[5008]: I0129 15:51:53.953398 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" event={"ID":"65ae154d-9b35-408c-bcdb-8b9601be71c8","Type":"ContainerStarted","Data":"1d607350ffbc24ef275435eb4ae5dec525e6f42db8162f7bae09094480df98a3"} Jan 29 15:51:53 crc kubenswrapper[5008]: 
I0129 15:51:53.954026 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:51:53 crc kubenswrapper[5008]: I0129 15:51:53.969532 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.329968576 podStartE2EDuration="7.969514845s" podCreationTimestamp="2026-01-29 15:51:46 +0000 UTC" firstStartedPulling="2026-01-29 15:51:47.235279552 +0000 UTC m=+1450.908133789" lastFinishedPulling="2026-01-29 15:51:52.874825781 +0000 UTC m=+1456.547680058" observedRunningTime="2026-01-29 15:51:53.961955561 +0000 UTC m=+1457.634809798" watchObservedRunningTime="2026-01-29 15:51:53.969514845 +0000 UTC m=+1457.642369092" Jan 29 15:51:53 crc kubenswrapper[5008]: I0129 15:51:53.984019 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.207788502 podStartE2EDuration="8.984000346s" podCreationTimestamp="2026-01-29 15:51:45 +0000 UTC" firstStartedPulling="2026-01-29 15:51:47.098611547 +0000 UTC m=+1450.771465774" lastFinishedPulling="2026-01-29 15:51:52.874823381 +0000 UTC m=+1456.547677618" observedRunningTime="2026-01-29 15:51:53.981243989 +0000 UTC m=+1457.654098246" watchObservedRunningTime="2026-01-29 15:51:53.984000346 +0000 UTC m=+1457.656854603" Jan 29 15:51:54 crc kubenswrapper[5008]: I0129 15:51:54.026366 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.136945453 podStartE2EDuration="9.026342583s" podCreationTimestamp="2026-01-29 15:51:45 +0000 UTC" firstStartedPulling="2026-01-29 15:51:47.019139889 +0000 UTC m=+1450.691994126" lastFinishedPulling="2026-01-29 15:51:52.908537009 +0000 UTC m=+1456.581391256" observedRunningTime="2026-01-29 15:51:54.015621574 +0000 UTC m=+1457.688475811" watchObservedRunningTime="2026-01-29 15:51:54.026342583 +0000 UTC m=+1457.699196830" Jan 29 15:51:54 crc kubenswrapper[5008]: I0129 15:51:54.042908 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.597022364 podStartE2EDuration="8.042893395s" podCreationTimestamp="2026-01-29 15:51:46 +0000 UTC" firstStartedPulling="2026-01-29 15:51:47.42896244 +0000 UTC m=+1451.101816677" lastFinishedPulling="2026-01-29 15:51:52.874833461 +0000 UTC m=+1456.547687708" observedRunningTime="2026-01-29 15:51:54.036436019 +0000 UTC m=+1457.709290276" watchObservedRunningTime="2026-01-29 15:51:54.042893395 +0000 UTC m=+1457.715747632" Jan 29 15:51:54 crc kubenswrapper[5008]: I0129 15:51:54.059087 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" podStartSLOduration=8.059066817 podStartE2EDuration="8.059066817s" podCreationTimestamp="2026-01-29 15:51:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:51:54.053170575 +0000 UTC m=+1457.726024812" watchObservedRunningTime="2026-01-29 15:51:54.059066817 +0000 UTC m=+1457.731921064" Jan 29 15:51:54 crc kubenswrapper[5008]: I0129 15:51:54.977599 5008 generic.go:334] "Generic (PLEG): container finished" podID="eef9ab07-3037-4115-bb8e-954191b169af" containerID="89a0838edd76e8e3384f319feeb4aa997d5c03e52a3680d202106547bff689f7" exitCode=0 Jan 29 15:51:54 crc kubenswrapper[5008]: I0129 15:51:54.977688 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-cell-mapping-2crqc" event={"ID":"eef9ab07-3037-4115-bb8e-954191b169af","Type":"ContainerDied","Data":"89a0838edd76e8e3384f319feeb4aa997d5c03e52a3680d202106547bff689f7"} Jan 29 15:51:54 crc kubenswrapper[5008]: I0129 15:51:54.982309 5008 generic.go:334] "Generic (PLEG): container finished" podID="a768e5ff-0521-4ad2-aa02-6774dcb5cdd0" containerID="3ff835b6bd219620556e6fd30136d1a1bc1bed3536d7cb9120523837f6c21c9e" exitCode=0 Jan 29 15:51:54 crc kubenswrapper[5008]: I0129 15:51:54.982342 5008 generic.go:334] "Generic (PLEG): container finished" podID="a768e5ff-0521-4ad2-aa02-6774dcb5cdd0" containerID="445f7efc7b26cc5d17d632da559c671e24b9c9c10f2a6700aafd3aa57f34f1c0" exitCode=143 Jan 29 15:51:54 crc kubenswrapper[5008]: I0129 15:51:54.982401 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0","Type":"ContainerDied","Data":"3ff835b6bd219620556e6fd30136d1a1bc1bed3536d7cb9120523837f6c21c9e"} Jan 29 15:51:54 crc kubenswrapper[5008]: I0129 15:51:54.982436 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0","Type":"ContainerDied","Data":"445f7efc7b26cc5d17d632da559c671e24b9c9c10f2a6700aafd3aa57f34f1c0"} Jan 29 15:51:54 crc kubenswrapper[5008]: E0129 15:51:54.989157 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" Jan 29 15:51:55 crc kubenswrapper[5008]: I0129 15:51:55.292693 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 15:51:55 crc kubenswrapper[5008]: I0129 15:51:55.463326 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-config-data\") pod \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\" (UID: \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\") " Jan 29 15:51:55 crc kubenswrapper[5008]: I0129 15:51:55.463431 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zl2n\" (UniqueName: \"kubernetes.io/projected/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-kube-api-access-2zl2n\") pod \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\" (UID: \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\") " Jan 29 15:51:55 crc kubenswrapper[5008]: I0129 15:51:55.463538 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-combined-ca-bundle\") pod \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\" (UID: \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\") " Jan 29 15:51:55 crc kubenswrapper[5008]: I0129 15:51:55.463601 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-logs\") pod \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\" (UID: \"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0\") " Jan 29 15:51:55 crc kubenswrapper[5008]: I0129 15:51:55.465405 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-logs" (OuterVolumeSpecName: "logs") pod "a768e5ff-0521-4ad2-aa02-6774dcb5cdd0" (UID: 
"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:51:55 crc kubenswrapper[5008]: I0129 15:51:55.470913 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-kube-api-access-2zl2n" (OuterVolumeSpecName: "kube-api-access-2zl2n") pod "a768e5ff-0521-4ad2-aa02-6774dcb5cdd0" (UID: "a768e5ff-0521-4ad2-aa02-6774dcb5cdd0"). InnerVolumeSpecName "kube-api-access-2zl2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:51:55 crc kubenswrapper[5008]: I0129 15:51:55.507623 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a768e5ff-0521-4ad2-aa02-6774dcb5cdd0" (UID: "a768e5ff-0521-4ad2-aa02-6774dcb5cdd0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:51:55 crc kubenswrapper[5008]: I0129 15:51:55.525687 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-config-data" (OuterVolumeSpecName: "config-data") pod "a768e5ff-0521-4ad2-aa02-6774dcb5cdd0" (UID: "a768e5ff-0521-4ad2-aa02-6774dcb5cdd0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:51:55 crc kubenswrapper[5008]: I0129 15:51:55.566067 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:51:55 crc kubenswrapper[5008]: I0129 15:51:55.566111 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zl2n\" (UniqueName: \"kubernetes.io/projected/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-kube-api-access-2zl2n\") on node \"crc\" DevicePath \"\"" Jan 29 15:51:55 crc kubenswrapper[5008]: I0129 15:51:55.566130 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:51:55 crc kubenswrapper[5008]: I0129 15:51:55.566144 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:51:55 crc kubenswrapper[5008]: I0129 15:51:55.996998 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 15:51:55 crc kubenswrapper[5008]: I0129 15:51:55.996988 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a768e5ff-0521-4ad2-aa02-6774dcb5cdd0","Type":"ContainerDied","Data":"9a2f240b5615f7e4a96ac0c8a498b92dc644dc9f81d75537df39e3b9f01f9020"} Jan 29 15:51:55 crc kubenswrapper[5008]: I0129 15:51:55.997092 5008 scope.go:117] "RemoveContainer" containerID="3ff835b6bd219620556e6fd30136d1a1bc1bed3536d7cb9120523837f6c21c9e" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.056801 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.088103 5008 scope.go:117] "RemoveContainer" containerID="445f7efc7b26cc5d17d632da559c671e24b9c9c10f2a6700aafd3aa57f34f1c0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.094042 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.112159 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:51:56 crc kubenswrapper[5008]: E0129 15:51:56.112553 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a768e5ff-0521-4ad2-aa02-6774dcb5cdd0" containerName="nova-metadata-log" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.112571 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="a768e5ff-0521-4ad2-aa02-6774dcb5cdd0" containerName="nova-metadata-log" Jan 29 15:51:56 crc kubenswrapper[5008]: E0129 15:51:56.112585 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a768e5ff-0521-4ad2-aa02-6774dcb5cdd0" containerName="nova-metadata-metadata" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.112593 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="a768e5ff-0521-4ad2-aa02-6774dcb5cdd0" containerName="nova-metadata-metadata" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.112774 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="a768e5ff-0521-4ad2-aa02-6774dcb5cdd0" containerName="nova-metadata-log" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.112875 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="a768e5ff-0521-4ad2-aa02-6774dcb5cdd0" containerName="nova-metadata-metadata" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.114123 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.117148 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.118955 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.145527 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.280286 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdglm\" (UniqueName: \"kubernetes.io/projected/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-kube-api-access-mdglm\") pod \"nova-metadata-0\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.280330 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-config-data\") pod \"nova-metadata-0\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.280508 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.280642 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-logs\") pod \"nova-metadata-0\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.280714 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.383205 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdglm\" (UniqueName: \"kubernetes.io/projected/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-kube-api-access-mdglm\") pod \"nova-metadata-0\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.383269 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-config-data\") pod \"nova-metadata-0\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.383309 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " 
pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.383353 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-logs\") pod \"nova-metadata-0\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.383381 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.384402 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-logs\") pod \"nova-metadata-0\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.387596 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.389096 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-config-data\") pod \"nova-metadata-0\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.389409 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.410397 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdglm\" (UniqueName: \"kubernetes.io/projected/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-kube-api-access-mdglm\") pod \"nova-metadata-0\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.438385 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.497105 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.497156 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.509280 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.509351 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.565552 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.567641 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2crqc" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.589981 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.689996 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-config-data\") pod \"eef9ab07-3037-4115-bb8e-954191b169af\" (UID: \"eef9ab07-3037-4115-bb8e-954191b169af\") " Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.690053 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-combined-ca-bundle\") pod \"eef9ab07-3037-4115-bb8e-954191b169af\" (UID: \"eef9ab07-3037-4115-bb8e-954191b169af\") " Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.690122 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-scripts\") pod \"eef9ab07-3037-4115-bb8e-954191b169af\" (UID: \"eef9ab07-3037-4115-bb8e-954191b169af\") " Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.690178 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsxq7\" (UniqueName: \"kubernetes.io/projected/eef9ab07-3037-4115-bb8e-954191b169af-kube-api-access-zsxq7\") pod \"eef9ab07-3037-4115-bb8e-954191b169af\" (UID: \"eef9ab07-3037-4115-bb8e-954191b169af\") " Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.697913 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-scripts" (OuterVolumeSpecName: "scripts") pod "eef9ab07-3037-4115-bb8e-954191b169af" (UID: "eef9ab07-3037-4115-bb8e-954191b169af"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.697949 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eef9ab07-3037-4115-bb8e-954191b169af-kube-api-access-zsxq7" (OuterVolumeSpecName: "kube-api-access-zsxq7") pod "eef9ab07-3037-4115-bb8e-954191b169af" (UID: "eef9ab07-3037-4115-bb8e-954191b169af"). InnerVolumeSpecName "kube-api-access-zsxq7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.741882 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eef9ab07-3037-4115-bb8e-954191b169af" (UID: "eef9ab07-3037-4115-bb8e-954191b169af"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.745913 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-config-data" (OuterVolumeSpecName: "config-data") pod "eef9ab07-3037-4115-bb8e-954191b169af" (UID: "eef9ab07-3037-4115-bb8e-954191b169af"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.792708 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.792744 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsxq7\" (UniqueName: \"kubernetes.io/projected/eef9ab07-3037-4115-bb8e-954191b169af-kube-api-access-zsxq7\") on node \"crc\" DevicePath \"\"" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.792758 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.792768 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef9ab07-3037-4115-bb8e-954191b169af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:51:56 crc kubenswrapper[5008]: I0129 15:51:56.963417 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:51:57 crc kubenswrapper[5008]: I0129 15:51:57.012247 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2crqc" event={"ID":"eef9ab07-3037-4115-bb8e-954191b169af","Type":"ContainerDied","Data":"8469f70a82067f3e5e3ddeda22384487ef8ddf5579da62e05ab8aad6137879e6"} Jan 29 15:51:57 crc kubenswrapper[5008]: I0129 15:51:57.012634 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8469f70a82067f3e5e3ddeda22384487ef8ddf5579da62e05ab8aad6137879e6" Jan 29 15:51:57 crc kubenswrapper[5008]: I0129 15:51:57.012268 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2crqc" Jan 29 15:51:57 crc kubenswrapper[5008]: I0129 15:51:57.015856 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9e359ccb-1739-4978-b6d7-cc9c22ba4bad","Type":"ContainerStarted","Data":"99b77dffe653c476d71bee1455ad5af6222f956467e807cdf199f218d7c28bf8"} Jan 29 15:51:57 crc kubenswrapper[5008]: I0129 15:51:57.066554 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 15:51:57 crc kubenswrapper[5008]: I0129 15:51:57.175255 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:51:57 crc kubenswrapper[5008]: I0129 15:51:57.175502 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" containerName="nova-api-log" containerID="cri-o://2d137f6ab32493e4c84e12dddea0af4d07130b45d33ad383089e874020edd1c9" gracePeriod=30 Jan 29 15:51:57 crc kubenswrapper[5008]: I0129 15:51:57.175585 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" containerName="nova-api-api" containerID="cri-o://6577ef7af46ac87bbeb2eb62d4d6f390b86ce894a2b7eb71d0570cec11f0f60f" gracePeriod=30 Jan 29 15:51:57 crc kubenswrapper[5008]: I0129 15:51:57.182822 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": EOF" Jan 29 15:51:57 crc kubenswrapper[5008]: I0129 15:51:57.183086 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": EOF" Jan 29 15:51:57 crc kubenswrapper[5008]: I0129 15:51:57.198961 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:51:57 crc kubenswrapper[5008]: I0129 15:51:57.334480 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a768e5ff-0521-4ad2-aa02-6774dcb5cdd0" path="/var/lib/kubelet/pods/a768e5ff-0521-4ad2-aa02-6774dcb5cdd0/volumes" Jan 29 15:51:57 crc kubenswrapper[5008]: I0129 15:51:57.736939 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 15:51:58 crc kubenswrapper[5008]: I0129 15:51:58.025001 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9e359ccb-1739-4978-b6d7-cc9c22ba4bad","Type":"ContainerStarted","Data":"dac97359f2204de7c90d0583e18d347f2ba09945977d72742713b4c743219109"} Jan 29 15:51:58 crc kubenswrapper[5008]: I0129 15:51:58.025042 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9e359ccb-1739-4978-b6d7-cc9c22ba4bad","Type":"ContainerStarted","Data":"bdd55f3c6f6a7cf5c018fd856f235769dc493feebb4b8884aa1dd17420aa8b21"} Jan 29 15:51:58 crc kubenswrapper[5008]: I0129 15:51:58.025158 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9e359ccb-1739-4978-b6d7-cc9c22ba4bad" containerName="nova-metadata-log" containerID="cri-o://bdd55f3c6f6a7cf5c018fd856f235769dc493feebb4b8884aa1dd17420aa8b21" gracePeriod=30 Jan 29 15:51:58 crc kubenswrapper[5008]: I0129 15:51:58.025675 5008 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9e359ccb-1739-4978-b6d7-cc9c22ba4bad" containerName="nova-metadata-metadata" containerID="cri-o://dac97359f2204de7c90d0583e18d347f2ba09945977d72742713b4c743219109" gracePeriod=30 Jan 29 15:51:58 crc kubenswrapper[5008]: I0129 15:51:58.032519 5008 generic.go:334] "Generic (PLEG): container finished" podID="aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" containerID="2d137f6ab32493e4c84e12dddea0af4d07130b45d33ad383089e874020edd1c9" exitCode=143 Jan 29 15:51:58 crc kubenswrapper[5008]: I0129 15:51:58.032836 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c","Type":"ContainerDied","Data":"2d137f6ab32493e4c84e12dddea0af4d07130b45d33ad383089e874020edd1c9"} Jan 29 15:51:58 crc kubenswrapper[5008]: I0129 15:51:58.085849 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.085828305 podStartE2EDuration="2.085828305s" podCreationTimestamp="2026-01-29 15:51:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:51:58.058083332 +0000 UTC m=+1461.730937619" watchObservedRunningTime="2026-01-29 15:51:58.085828305 +0000 UTC m=+1461.758682562" Jan 29 15:51:59 crc kubenswrapper[5008]: I0129 15:51:59.053923 5008 generic.go:334] "Generic (PLEG): container finished" podID="9e359ccb-1739-4978-b6d7-cc9c22ba4bad" containerID="dac97359f2204de7c90d0583e18d347f2ba09945977d72742713b4c743219109" exitCode=0 Jan 29 15:51:59 crc kubenswrapper[5008]: I0129 15:51:59.054240 5008 generic.go:334] "Generic (PLEG): container finished" podID="9e359ccb-1739-4978-b6d7-cc9c22ba4bad" containerID="bdd55f3c6f6a7cf5c018fd856f235769dc493feebb4b8884aa1dd17420aa8b21" exitCode=143 Jan 29 15:51:59 crc kubenswrapper[5008]: I0129 15:51:59.054018 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9e359ccb-1739-4978-b6d7-cc9c22ba4bad","Type":"ContainerDied","Data":"dac97359f2204de7c90d0583e18d347f2ba09945977d72742713b4c743219109"} Jan 29 15:51:59 crc kubenswrapper[5008]: I0129 15:51:59.054365 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9e359ccb-1739-4978-b6d7-cc9c22ba4bad","Type":"ContainerDied","Data":"bdd55f3c6f6a7cf5c018fd856f235769dc493feebb4b8884aa1dd17420aa8b21"} Jan 29 15:51:59 crc kubenswrapper[5008]: I0129 15:51:59.054416 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1f0bf87f-118b-4ad5-8354-688ae93d75e8" containerName="nova-scheduler-scheduler" containerID="cri-o://a45eabdd3a916892c15bd4c53b9c5d38521f3313283444317d3b41cb672cda50" gracePeriod=30 Jan 29 15:51:59 crc kubenswrapper[5008]: I0129 15:51:59.925660 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.052941 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-config-data\") pod \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.053009 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-combined-ca-bundle\") pod \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.053117 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-nova-metadata-tls-certs\") pod \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.053159 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-logs\") pod \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.053208 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdglm\" (UniqueName: \"kubernetes.io/projected/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-kube-api-access-mdglm\") pod \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\" (UID: \"9e359ccb-1739-4978-b6d7-cc9c22ba4bad\") " Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.053616 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-logs" (OuterVolumeSpecName: "logs") pod "9e359ccb-1739-4978-b6d7-cc9c22ba4bad" (UID: "9e359ccb-1739-4978-b6d7-cc9c22ba4bad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.053803 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.059040 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-kube-api-access-mdglm" (OuterVolumeSpecName: "kube-api-access-mdglm") pod "9e359ccb-1739-4978-b6d7-cc9c22ba4bad" (UID: "9e359ccb-1739-4978-b6d7-cc9c22ba4bad"). InnerVolumeSpecName "kube-api-access-mdglm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.069305 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9e359ccb-1739-4978-b6d7-cc9c22ba4bad","Type":"ContainerDied","Data":"99b77dffe653c476d71bee1455ad5af6222f956467e807cdf199f218d7c28bf8"} Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.069364 5008 scope.go:117] "RemoveContainer" containerID="dac97359f2204de7c90d0583e18d347f2ba09945977d72742713b4c743219109" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.069380 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.086555 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-config-data" (OuterVolumeSpecName: "config-data") pod "9e359ccb-1739-4978-b6d7-cc9c22ba4bad" (UID: "9e359ccb-1739-4978-b6d7-cc9c22ba4bad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.102762 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e359ccb-1739-4978-b6d7-cc9c22ba4bad" (UID: "9e359ccb-1739-4978-b6d7-cc9c22ba4bad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.109156 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "9e359ccb-1739-4978-b6d7-cc9c22ba4bad" (UID: "9e359ccb-1739-4978-b6d7-cc9c22ba4bad"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.155450 5008 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.155484 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdglm\" (UniqueName: \"kubernetes.io/projected/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-kube-api-access-mdglm\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.155494 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.155505 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e359ccb-1739-4978-b6d7-cc9c22ba4bad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.183943 5008 scope.go:117] "RemoveContainer" containerID="bdd55f3c6f6a7cf5c018fd856f235769dc493feebb4b8884aa1dd17420aa8b21" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.414955 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.424807 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.439237 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:52:00 crc kubenswrapper[5008]: E0129 15:52:00.439777 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eef9ab07-3037-4115-bb8e-954191b169af" containerName="nova-manage" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.439958 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="eef9ab07-3037-4115-bb8e-954191b169af" containerName="nova-manage" Jan 29 15:52:00 crc kubenswrapper[5008]: 
E0129 15:52:00.439985 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e359ccb-1739-4978-b6d7-cc9c22ba4bad" containerName="nova-metadata-metadata" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.440000 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e359ccb-1739-4978-b6d7-cc9c22ba4bad" containerName="nova-metadata-metadata" Jan 29 15:52:00 crc kubenswrapper[5008]: E0129 15:52:00.440037 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e359ccb-1739-4978-b6d7-cc9c22ba4bad" containerName="nova-metadata-log" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.440061 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e359ccb-1739-4978-b6d7-cc9c22ba4bad" containerName="nova-metadata-log" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.440831 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e359ccb-1739-4978-b6d7-cc9c22ba4bad" containerName="nova-metadata-metadata" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.440871 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e359ccb-1739-4978-b6d7-cc9c22ba4bad" containerName="nova-metadata-log" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.440917 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="eef9ab07-3037-4115-bb8e-954191b169af" containerName="nova-manage" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.442188 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.444288 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.444349 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.467330 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.562935 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.562990 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5fvq\" (UniqueName: \"kubernetes.io/projected/038b9a46-5128-497b-8073-557e8f3542fb-kube-api-access-l5fvq\") pod \"nova-metadata-0\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.563062 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/038b9a46-5128-497b-8073-557e8f3542fb-logs\") pod \"nova-metadata-0\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.563114 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-config-data\") pod \"nova-metadata-0\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " 
pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.563189 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.664746 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.664958 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.664989 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5fvq\" (UniqueName: \"kubernetes.io/projected/038b9a46-5128-497b-8073-557e8f3542fb-kube-api-access-l5fvq\") pod \"nova-metadata-0\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.665026 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/038b9a46-5128-497b-8073-557e8f3542fb-logs\") pod \"nova-metadata-0\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.665070 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-config-data\") pod \"nova-metadata-0\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.666325 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/038b9a46-5128-497b-8073-557e8f3542fb-logs\") pod \"nova-metadata-0\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.669084 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-config-data\") pod \"nova-metadata-0\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.669675 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.670561 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-combined-ca-bundle\") 
pod \"nova-metadata-0\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.683751 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5fvq\" (UniqueName: \"kubernetes.io/projected/038b9a46-5128-497b-8073-557e8f3542fb-kube-api-access-l5fvq\") pod \"nova-metadata-0\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " pod="openstack/nova-metadata-0" Jan 29 15:52:00 crc kubenswrapper[5008]: I0129 15:52:00.778124 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 15:52:01 crc kubenswrapper[5008]: I0129 15:52:01.108395 5008 generic.go:334] "Generic (PLEG): container finished" podID="1f0bf87f-118b-4ad5-8354-688ae93d75e8" containerID="a45eabdd3a916892c15bd4c53b9c5d38521f3313283444317d3b41cb672cda50" exitCode=0 Jan 29 15:52:01 crc kubenswrapper[5008]: I0129 15:52:01.108717 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1f0bf87f-118b-4ad5-8354-688ae93d75e8","Type":"ContainerDied","Data":"a45eabdd3a916892c15bd4c53b9c5d38521f3313283444317d3b41cb672cda50"} Jan 29 15:52:01 crc kubenswrapper[5008]: I0129 15:52:01.262196 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:52:01 crc kubenswrapper[5008]: I0129 15:52:01.338000 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e359ccb-1739-4978-b6d7-cc9c22ba4bad" path="/var/lib/kubelet/pods/9e359ccb-1739-4978-b6d7-cc9c22ba4bad/volumes" Jan 29 15:52:01 crc kubenswrapper[5008]: E0129 15:52:01.510423 5008 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a45eabdd3a916892c15bd4c53b9c5d38521f3313283444317d3b41cb672cda50 is running failed: container process not found" containerID="a45eabdd3a916892c15bd4c53b9c5d38521f3313283444317d3b41cb672cda50" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 15:52:01 crc kubenswrapper[5008]: E0129 15:52:01.517344 5008 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a45eabdd3a916892c15bd4c53b9c5d38521f3313283444317d3b41cb672cda50 is running failed: container process not found" containerID="a45eabdd3a916892c15bd4c53b9c5d38521f3313283444317d3b41cb672cda50" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 15:52:01 crc kubenswrapper[5008]: E0129 15:52:01.517683 5008 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a45eabdd3a916892c15bd4c53b9c5d38521f3313283444317d3b41cb672cda50 is running failed: container process not found" containerID="a45eabdd3a916892c15bd4c53b9c5d38521f3313283444317d3b41cb672cda50" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 15:52:01 crc kubenswrapper[5008]: E0129 15:52:01.517712 5008 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a45eabdd3a916892c15bd4c53b9c5d38521f3313283444317d3b41cb672cda50 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="1f0bf87f-118b-4ad5-8354-688ae93d75e8" containerName="nova-scheduler-scheduler" Jan 29 15:52:01 crc kubenswrapper[5008]: I0129 15:52:01.627978 5008 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:52:01 crc kubenswrapper[5008]: I0129 15:52:01.718591 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-h99wm"] Jan 29 15:52:01 crc kubenswrapper[5008]: I0129 15:52:01.718871 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-h99wm" podUID="35979baf-dba0-453c-bafd-16985d082448" containerName="dnsmasq-dns" containerID="cri-o://517994ddf8724b531c045e361104301810488aaea5740758e3935f990fbe3040" gracePeriod=10 Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.017250 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.125495 5008 generic.go:334] "Generic (PLEG): container finished" podID="35979baf-dba0-453c-bafd-16985d082448" containerID="517994ddf8724b531c045e361104301810488aaea5740758e3935f990fbe3040" exitCode=0 Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.125583 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-h99wm" event={"ID":"35979baf-dba0-453c-bafd-16985d082448","Type":"ContainerDied","Data":"517994ddf8724b531c045e361104301810488aaea5740758e3935f990fbe3040"} Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.128101 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"038b9a46-5128-497b-8073-557e8f3542fb","Type":"ContainerStarted","Data":"951b0f36fd6a684d8c30fa21487872b1f27e31c08947dd98a725b29af452b297"} Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.128174 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"038b9a46-5128-497b-8073-557e8f3542fb","Type":"ContainerStarted","Data":"b1cb4fe0e965ed395741ca05d4744c778b350ee5b58ae99ed0af4f4789b2408e"} Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.128186 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"038b9a46-5128-497b-8073-557e8f3542fb","Type":"ContainerStarted","Data":"f54ae340e3e9e95461e8dd7339317d96f2c608cdca914d4ca65b81b43814916d"} Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.129696 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1f0bf87f-118b-4ad5-8354-688ae93d75e8","Type":"ContainerDied","Data":"868f4c6e442b8edd70fd72637691064134ed05f40e47973b7eb3e61bb8292d33"} Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.129740 5008 scope.go:117] "RemoveContainer" containerID="a45eabdd3a916892c15bd4c53b9c5d38521f3313283444317d3b41cb672cda50" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.129860 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.164309 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.164286466 podStartE2EDuration="2.164286466s" podCreationTimestamp="2026-01-29 15:52:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:52:02.153557376 +0000 UTC m=+1465.826411623" watchObservedRunningTime="2026-01-29 15:52:02.164286466 +0000 UTC m=+1465.837140703" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.194581 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f0bf87f-118b-4ad5-8354-688ae93d75e8-combined-ca-bundle\") pod \"1f0bf87f-118b-4ad5-8354-688ae93d75e8\" (UID: \"1f0bf87f-118b-4ad5-8354-688ae93d75e8\") " Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.194822 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f0bf87f-118b-4ad5-8354-688ae93d75e8-config-data\") pod \"1f0bf87f-118b-4ad5-8354-688ae93d75e8\" (UID: \"1f0bf87f-118b-4ad5-8354-688ae93d75e8\") " Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.194888 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4pfj\" (UniqueName: \"kubernetes.io/projected/1f0bf87f-118b-4ad5-8354-688ae93d75e8-kube-api-access-h4pfj\") pod \"1f0bf87f-118b-4ad5-8354-688ae93d75e8\" (UID: \"1f0bf87f-118b-4ad5-8354-688ae93d75e8\") " Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.199901 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f0bf87f-118b-4ad5-8354-688ae93d75e8-kube-api-access-h4pfj" (OuterVolumeSpecName: "kube-api-access-h4pfj") pod "1f0bf87f-118b-4ad5-8354-688ae93d75e8" (UID: "1f0bf87f-118b-4ad5-8354-688ae93d75e8"). InnerVolumeSpecName "kube-api-access-h4pfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.225275 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f0bf87f-118b-4ad5-8354-688ae93d75e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f0bf87f-118b-4ad5-8354-688ae93d75e8" (UID: "1f0bf87f-118b-4ad5-8354-688ae93d75e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.228405 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f0bf87f-118b-4ad5-8354-688ae93d75e8-config-data" (OuterVolumeSpecName: "config-data") pod "1f0bf87f-118b-4ad5-8354-688ae93d75e8" (UID: "1f0bf87f-118b-4ad5-8354-688ae93d75e8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.297283 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f0bf87f-118b-4ad5-8354-688ae93d75e8-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.297731 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4pfj\" (UniqueName: \"kubernetes.io/projected/1f0bf87f-118b-4ad5-8354-688ae93d75e8-kube-api-access-h4pfj\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.297744 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f0bf87f-118b-4ad5-8354-688ae93d75e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.467876 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.478124 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.489661 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 15:52:02 crc kubenswrapper[5008]: E0129 15:52:02.490165 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f0bf87f-118b-4ad5-8354-688ae93d75e8" containerName="nova-scheduler-scheduler" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.490197 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f0bf87f-118b-4ad5-8354-688ae93d75e8" containerName="nova-scheduler-scheduler" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.490429 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f0bf87f-118b-4ad5-8354-688ae93d75e8" containerName="nova-scheduler-scheduler" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.491196 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.494352 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.526485 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.603702 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-config-data\") pod \"nova-scheduler-0\" (UID: \"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.603808 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr65m\" (UniqueName: \"kubernetes.io/projected/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-kube-api-access-fr65m\") pod \"nova-scheduler-0\" (UID: \"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.603864 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.705353 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.705520 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-config-data\") pod \"nova-scheduler-0\" (UID: \"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.705592 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr65m\" (UniqueName: \"kubernetes.io/projected/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-kube-api-access-fr65m\") pod \"nova-scheduler-0\" (UID: \"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.711388 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.712655 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-config-data\") pod \"nova-scheduler-0\" (UID: \"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.728388 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr65m\" (UniqueName: 
\"kubernetes.io/projected/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-kube-api-access-fr65m\") pod \"nova-scheduler-0\" (UID: \"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.787971 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.828031 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.908860 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-ovsdbserver-sb\") pod \"35979baf-dba0-453c-bafd-16985d082448\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.908922 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4hcq\" (UniqueName: \"kubernetes.io/projected/35979baf-dba0-453c-bafd-16985d082448-kube-api-access-w4hcq\") pod \"35979baf-dba0-453c-bafd-16985d082448\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.908964 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-dns-swift-storage-0\") pod \"35979baf-dba0-453c-bafd-16985d082448\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.909004 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-dns-svc\") pod \"35979baf-dba0-453c-bafd-16985d082448\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.909022 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-ovsdbserver-nb\") pod \"35979baf-dba0-453c-bafd-16985d082448\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.909070 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-config\") pod \"35979baf-dba0-453c-bafd-16985d082448\" (UID: \"35979baf-dba0-453c-bafd-16985d082448\") " Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.924225 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35979baf-dba0-453c-bafd-16985d082448-kube-api-access-w4hcq" (OuterVolumeSpecName: "kube-api-access-w4hcq") pod "35979baf-dba0-453c-bafd-16985d082448" (UID: "35979baf-dba0-453c-bafd-16985d082448"). InnerVolumeSpecName "kube-api-access-w4hcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.959526 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "35979baf-dba0-453c-bafd-16985d082448" (UID: "35979baf-dba0-453c-bafd-16985d082448"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.977438 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-config" (OuterVolumeSpecName: "config") pod "35979baf-dba0-453c-bafd-16985d082448" (UID: "35979baf-dba0-453c-bafd-16985d082448"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:52:02 crc kubenswrapper[5008]: I0129 15:52:02.985810 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "35979baf-dba0-453c-bafd-16985d082448" (UID: "35979baf-dba0-453c-bafd-16985d082448"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.005231 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "35979baf-dba0-453c-bafd-16985d082448" (UID: "35979baf-dba0-453c-bafd-16985d082448"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.013353 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4hcq\" (UniqueName: \"kubernetes.io/projected/35979baf-dba0-453c-bafd-16985d082448-kube-api-access-w4hcq\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.013402 5008 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.013415 5008 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.013427 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.013441 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.022015 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "35979baf-dba0-453c-bafd-16985d082448" (UID: "35979baf-dba0-453c-bafd-16985d082448"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.115116 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35979baf-dba0-453c-bafd-16985d082448-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.142023 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-h99wm" event={"ID":"35979baf-dba0-453c-bafd-16985d082448","Type":"ContainerDied","Data":"3e9db3acbe84cb18dcd650ffdeedfffc3c78951f208824646557062d45cea8c7"} Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.142083 5008 scope.go:117] "RemoveContainer" containerID="517994ddf8724b531c045e361104301810488aaea5740758e3935f990fbe3040" Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.142106 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-h99wm" Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.181542 5008 scope.go:117] "RemoveContainer" containerID="054e6e3ef42c95903f288b4bdf317b2b2caa13f9aeb23d4a04ff1cd84e828a41" Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.182863 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-h99wm"] Jan 29 15:52:03 crc kubenswrapper[5008]: E0129 15:52:03.186014 5008 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00b42485_f42b_4ca6_8e84_1a795454dd9f.slice/crio-9cfdb60cd6bab187b310c7e3b7b9918a771aed98988c83c807016cc578b45171\": RecentStats: unable to find data in memory cache]" Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.191539 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-h99wm"] Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.312233 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 15:52:03 crc kubenswrapper[5008]: W0129 15:52:03.319657 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fb31b59_3f31_4c28_ab5c_e2248ed9fd68.slice/crio-389259437b307b4cfc4471206316ecc9ba9f12cd3bf0806c91536ddba10b92db WatchSource:0}: Error finding container 389259437b307b4cfc4471206316ecc9ba9f12cd3bf0806c91536ddba10b92db: Status 404 returned error can't find the container with id 389259437b307b4cfc4471206316ecc9ba9f12cd3bf0806c91536ddba10b92db Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.346211 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f0bf87f-118b-4ad5-8354-688ae93d75e8" path="/var/lib/kubelet/pods/1f0bf87f-118b-4ad5-8354-688ae93d75e8/volumes" Jan 29 15:52:03 crc kubenswrapper[5008]: I0129 15:52:03.347086 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35979baf-dba0-453c-bafd-16985d082448" path="/var/lib/kubelet/pods/35979baf-dba0-453c-bafd-16985d082448/volumes" Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.215082 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68","Type":"ContainerStarted","Data":"17b2938a300945d89c2e820081e7b2a24c3ca3bec8b7edb3be53cf8c5bdf2768"} Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.215540 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-scheduler-0" event={"ID":"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68","Type":"ContainerStarted","Data":"389259437b307b4cfc4471206316ecc9ba9f12cd3bf0806c91536ddba10b92db"} Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.241071 5008 generic.go:334] "Generic (PLEG): container finished" podID="aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" containerID="6577ef7af46ac87bbeb2eb62d4d6f390b86ce894a2b7eb71d0570cec11f0f60f" exitCode=0 Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.241142 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c","Type":"ContainerDied","Data":"6577ef7af46ac87bbeb2eb62d4d6f390b86ce894a2b7eb71d0570cec11f0f60f"} Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.241990 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.241974705 podStartE2EDuration="2.241974705s" podCreationTimestamp="2026-01-29 15:52:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:52:04.237632659 +0000 UTC m=+1467.910486916" watchObservedRunningTime="2026-01-29 15:52:04.241974705 +0000 UTC m=+1467.914828942" Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.641709 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.742370 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6cf4\" (UniqueName: \"kubernetes.io/projected/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-kube-api-access-f6cf4\") pod \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\" (UID: \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\") " Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.742437 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-config-data\") pod \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\" (UID: \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\") " Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.742484 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-combined-ca-bundle\") pod \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\" (UID: \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\") " Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.742699 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-logs\") pod \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\" (UID: \"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c\") " Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.743825 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-logs" (OuterVolumeSpecName: "logs") pod "aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" (UID: "aafcc4fd-9cb2-458b-892e-0e56adcdfa2c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.750840 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-kube-api-access-f6cf4" (OuterVolumeSpecName: "kube-api-access-f6cf4") pod "aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" (UID: "aafcc4fd-9cb2-458b-892e-0e56adcdfa2c"). InnerVolumeSpecName "kube-api-access-f6cf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.777239 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" (UID: "aafcc4fd-9cb2-458b-892e-0e56adcdfa2c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.777871 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-config-data" (OuterVolumeSpecName: "config-data") pod "aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" (UID: "aafcc4fd-9cb2-458b-892e-0e56adcdfa2c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.845381 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.845438 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6cf4\" (UniqueName: \"kubernetes.io/projected/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-kube-api-access-f6cf4\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.845453 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:04 crc kubenswrapper[5008]: I0129 15:52:04.845467 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.259701 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.260310 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aafcc4fd-9cb2-458b-892e-0e56adcdfa2c","Type":"ContainerDied","Data":"0fa105059117f2b4c51f1c17146bba198c1ad14ed2d53794274c62ac38095b80"} Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.260354 5008 scope.go:117] "RemoveContainer" containerID="6577ef7af46ac87bbeb2eb62d4d6f390b86ce894a2b7eb71d0570cec11f0f60f" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.286405 5008 scope.go:117] "RemoveContainer" containerID="2d137f6ab32493e4c84e12dddea0af4d07130b45d33ad383089e874020edd1c9" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.342493 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.342533 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.357905 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:05 crc kubenswrapper[5008]: E0129 15:52:05.358312 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" containerName="nova-api-api" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.358336 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" containerName="nova-api-api" Jan 29 15:52:05 crc kubenswrapper[5008]: E0129 15:52:05.358352 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35979baf-dba0-453c-bafd-16985d082448" containerName="dnsmasq-dns" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.358360 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="35979baf-dba0-453c-bafd-16985d082448" containerName="dnsmasq-dns" Jan 29 15:52:05 crc kubenswrapper[5008]: E0129 15:52:05.358377 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" containerName="nova-api-log" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.358384 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" containerName="nova-api-log" Jan 29 15:52:05 crc kubenswrapper[5008]: E0129 15:52:05.358396 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35979baf-dba0-453c-bafd-16985d082448" containerName="init" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.358403 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="35979baf-dba0-453c-bafd-16985d082448" containerName="init" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.358584 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="35979baf-dba0-453c-bafd-16985d082448" containerName="dnsmasq-dns" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.358596 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" containerName="nova-api-api" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.358614 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" containerName="nova-api-log" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.359539 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.364562 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.365463 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.455229 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmzzn\" (UniqueName: \"kubernetes.io/projected/efd2d95c-747e-4f68-9eca-436834c87a96-kube-api-access-kmzzn\") pod \"nova-api-0\" (UID: \"efd2d95c-747e-4f68-9eca-436834c87a96\") " pod="openstack/nova-api-0" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.456028 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd2d95c-747e-4f68-9eca-436834c87a96-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"efd2d95c-747e-4f68-9eca-436834c87a96\") " pod="openstack/nova-api-0" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.456133 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efd2d95c-747e-4f68-9eca-436834c87a96-logs\") pod \"nova-api-0\" (UID: \"efd2d95c-747e-4f68-9eca-436834c87a96\") " pod="openstack/nova-api-0" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.456209 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efd2d95c-747e-4f68-9eca-436834c87a96-config-data\") pod \"nova-api-0\" (UID: \"efd2d95c-747e-4f68-9eca-436834c87a96\") " pod="openstack/nova-api-0" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.558079 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmzzn\" (UniqueName: \"kubernetes.io/projected/efd2d95c-747e-4f68-9eca-436834c87a96-kube-api-access-kmzzn\") pod \"nova-api-0\" (UID: \"efd2d95c-747e-4f68-9eca-436834c87a96\") " pod="openstack/nova-api-0" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.558261 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd2d95c-747e-4f68-9eca-436834c87a96-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"efd2d95c-747e-4f68-9eca-436834c87a96\") " pod="openstack/nova-api-0" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.558299 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efd2d95c-747e-4f68-9eca-436834c87a96-logs\") pod \"nova-api-0\" (UID: \"efd2d95c-747e-4f68-9eca-436834c87a96\") " pod="openstack/nova-api-0" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.558339 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efd2d95c-747e-4f68-9eca-436834c87a96-config-data\") pod \"nova-api-0\" (UID: \"efd2d95c-747e-4f68-9eca-436834c87a96\") " pod="openstack/nova-api-0" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.559065 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efd2d95c-747e-4f68-9eca-436834c87a96-logs\") pod \"nova-api-0\" (UID: \"efd2d95c-747e-4f68-9eca-436834c87a96\") " 
pod="openstack/nova-api-0" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.563991 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efd2d95c-747e-4f68-9eca-436834c87a96-config-data\") pod \"nova-api-0\" (UID: \"efd2d95c-747e-4f68-9eca-436834c87a96\") " pod="openstack/nova-api-0" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.564151 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd2d95c-747e-4f68-9eca-436834c87a96-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"efd2d95c-747e-4f68-9eca-436834c87a96\") " pod="openstack/nova-api-0" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.576855 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmzzn\" (UniqueName: \"kubernetes.io/projected/efd2d95c-747e-4f68-9eca-436834c87a96-kube-api-access-kmzzn\") pod \"nova-api-0\" (UID: \"efd2d95c-747e-4f68-9eca-436834c87a96\") " pod="openstack/nova-api-0" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.678749 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.779077 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 15:52:05 crc kubenswrapper[5008]: I0129 15:52:05.779807 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 15:52:06 crc kubenswrapper[5008]: I0129 15:52:06.202759 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:06 crc kubenswrapper[5008]: W0129 15:52:06.206190 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefd2d95c_747e_4f68_9eca_436834c87a96.slice/crio-fffdd9b250912494bb4bca4bfd92b94d0781e1cf0cbf079d1c4fe2bc1d2f70ff WatchSource:0}: Error finding container fffdd9b250912494bb4bca4bfd92b94d0781e1cf0cbf079d1c4fe2bc1d2f70ff: Status 404 returned error can't find the container with id fffdd9b250912494bb4bca4bfd92b94d0781e1cf0cbf079d1c4fe2bc1d2f70ff Jan 29 15:52:06 crc kubenswrapper[5008]: I0129 15:52:06.271543 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"efd2d95c-747e-4f68-9eca-436834c87a96","Type":"ContainerStarted","Data":"fffdd9b250912494bb4bca4bfd92b94d0781e1cf0cbf079d1c4fe2bc1d2f70ff"} Jan 29 15:52:07 crc kubenswrapper[5008]: I0129 15:52:07.285978 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"efd2d95c-747e-4f68-9eca-436834c87a96","Type":"ContainerStarted","Data":"8a801ee0afabe9a56e81dd0e385057e7647970f6e434df2be1749ac0726c9c9c"} Jan 29 15:52:07 crc kubenswrapper[5008]: I0129 15:52:07.286335 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"efd2d95c-747e-4f68-9eca-436834c87a96","Type":"ContainerStarted","Data":"3dd7f1c9512e33fd74ad75dbb59ae738d4a68177c58dd491acfa86b6b891688b"} Jan 29 15:52:07 crc kubenswrapper[5008]: I0129 15:52:07.312328 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.312306551 podStartE2EDuration="2.312306551s" podCreationTimestamp="2026-01-29 15:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-29 15:52:07.306652774 +0000 UTC m=+1470.979507051" watchObservedRunningTime="2026-01-29 15:52:07.312306551 +0000 UTC m=+1470.985160808" Jan 29 15:52:07 crc kubenswrapper[5008]: I0129 15:52:07.342913 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aafcc4fd-9cb2-458b-892e-0e56adcdfa2c" path="/var/lib/kubelet/pods/aafcc4fd-9cb2-458b-892e-0e56adcdfa2c/volumes" Jan 29 15:52:07 crc kubenswrapper[5008]: I0129 15:52:07.828742 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 15:52:09 crc kubenswrapper[5008]: E0129 15:52:09.451894 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 29 15:52:09 crc kubenswrapper[5008]: E0129 15:52:09.452415 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5vwdz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:52:09 crc kubenswrapper[5008]: E0129 15:52:09.453646 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" Jan 29 15:52:10 crc kubenswrapper[5008]: I0129 15:52:10.779989 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 15:52:10 crc kubenswrapper[5008]: I0129 15:52:10.780058 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 15:52:11 crc kubenswrapper[5008]: I0129 15:52:11.800897 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="038b9a46-5128-497b-8073-557e8f3542fb" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 15:52:11 crc kubenswrapper[5008]: I0129 15:52:11.800957 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="038b9a46-5128-497b-8073-557e8f3542fb" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 15:52:12 crc kubenswrapper[5008]: I0129 15:52:12.329919 5008 generic.go:334] "Generic (PLEG): container finished" podID="a0d0cf25-1253-4f34-91a0-c4381d2e8a3f" containerID="36c4369212a2c18b6f334f104822d0182e207e44849984ff3689c410393720c8" exitCode=0 Jan 29 15:52:12 crc kubenswrapper[5008]: I0129 15:52:12.329954 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-k5vpb" event={"ID":"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f","Type":"ContainerDied","Data":"36c4369212a2c18b6f334f104822d0182e207e44849984ff3689c410393720c8"} Jan 29 15:52:12 crc kubenswrapper[5008]: I0129 15:52:12.828482 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 15:52:12 crc kubenswrapper[5008]: I0129 15:52:12.861287 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 15:52:13 crc kubenswrapper[5008]: I0129 15:52:13.396082 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 15:52:13 crc kubenswrapper[5008]: E0129 15:52:13.400287 5008 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00b42485_f42b_4ca6_8e84_1a795454dd9f.slice/crio-9cfdb60cd6bab187b310c7e3b7b9918a771aed98988c83c807016cc578b45171\": RecentStats: unable to find data in memory cache]" Jan 29 15:52:13 crc kubenswrapper[5008]: I0129 15:52:13.779641 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-k5vpb" Jan 29 15:52:13 crc kubenswrapper[5008]: I0129 15:52:13.818387 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-config-data\") pod \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\" (UID: \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\") " Jan 29 15:52:13 crc kubenswrapper[5008]: I0129 15:52:13.818553 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-scripts\") pod \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\" (UID: \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\") " Jan 29 15:52:13 crc kubenswrapper[5008]: I0129 15:52:13.818680 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-combined-ca-bundle\") pod \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\" (UID: \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\") " Jan 29 15:52:13 crc kubenswrapper[5008]: I0129 15:52:13.818721 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84qhc\" (UniqueName: \"kubernetes.io/projected/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-kube-api-access-84qhc\") pod \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\" (UID: \"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f\") " Jan 29 15:52:13 crc kubenswrapper[5008]: I0129 15:52:13.825013 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-kube-api-access-84qhc" (OuterVolumeSpecName: "kube-api-access-84qhc") pod "a0d0cf25-1253-4f34-91a0-c4381d2e8a3f" (UID: "a0d0cf25-1253-4f34-91a0-c4381d2e8a3f"). InnerVolumeSpecName "kube-api-access-84qhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:52:13 crc kubenswrapper[5008]: I0129 15:52:13.836059 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-scripts" (OuterVolumeSpecName: "scripts") pod "a0d0cf25-1253-4f34-91a0-c4381d2e8a3f" (UID: "a0d0cf25-1253-4f34-91a0-c4381d2e8a3f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:13 crc kubenswrapper[5008]: I0129 15:52:13.851416 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-config-data" (OuterVolumeSpecName: "config-data") pod "a0d0cf25-1253-4f34-91a0-c4381d2e8a3f" (UID: "a0d0cf25-1253-4f34-91a0-c4381d2e8a3f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:13 crc kubenswrapper[5008]: I0129 15:52:13.853148 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0d0cf25-1253-4f34-91a0-c4381d2e8a3f" (UID: "a0d0cf25-1253-4f34-91a0-c4381d2e8a3f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:13 crc kubenswrapper[5008]: I0129 15:52:13.920755 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:13 crc kubenswrapper[5008]: I0129 15:52:13.920816 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:13 crc kubenswrapper[5008]: I0129 15:52:13.920829 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:13 crc kubenswrapper[5008]: I0129 15:52:13.920843 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84qhc\" (UniqueName: \"kubernetes.io/projected/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f-kube-api-access-84qhc\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:13 crc kubenswrapper[5008]: I0129 15:52:13.990290 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:52:13 crc kubenswrapper[5008]: I0129 15:52:13.990361 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.358761 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-k5vpb" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.358913 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-k5vpb" event={"ID":"a0d0cf25-1253-4f34-91a0-c4381d2e8a3f","Type":"ContainerDied","Data":"028242919e3f4265fc6386d321897f9b93da1293777fa8227ed9be3c5ccefdec"} Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.360122 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="028242919e3f4265fc6386d321897f9b93da1293777fa8227ed9be3c5ccefdec" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.441645 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 15:52:14 crc kubenswrapper[5008]: E0129 15:52:14.442320 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0d0cf25-1253-4f34-91a0-c4381d2e8a3f" containerName="nova-cell1-conductor-db-sync" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.442339 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0d0cf25-1253-4f34-91a0-c4381d2e8a3f" containerName="nova-cell1-conductor-db-sync" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.442572 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0d0cf25-1253-4f34-91a0-c4381d2e8a3f" containerName="nova-cell1-conductor-db-sync" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.443511 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.446425 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.449983 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a40e352-7353-41e6-8c6e-58b7beca8ab9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1a40e352-7353-41e6-8c6e-58b7beca8ab9\") " pod="openstack/nova-cell1-conductor-0" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.450087 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a40e352-7353-41e6-8c6e-58b7beca8ab9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1a40e352-7353-41e6-8c6e-58b7beca8ab9\") " pod="openstack/nova-cell1-conductor-0" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.450192 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm4js\" (UniqueName: \"kubernetes.io/projected/1a40e352-7353-41e6-8c6e-58b7beca8ab9-kube-api-access-qm4js\") pod \"nova-cell1-conductor-0\" (UID: \"1a40e352-7353-41e6-8c6e-58b7beca8ab9\") " pod="openstack/nova-cell1-conductor-0" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.457955 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.550768 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a40e352-7353-41e6-8c6e-58b7beca8ab9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1a40e352-7353-41e6-8c6e-58b7beca8ab9\") " pod="openstack/nova-cell1-conductor-0" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.550888 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm4js\" (UniqueName: \"kubernetes.io/projected/1a40e352-7353-41e6-8c6e-58b7beca8ab9-kube-api-access-qm4js\") pod \"nova-cell1-conductor-0\" (UID: \"1a40e352-7353-41e6-8c6e-58b7beca8ab9\") " pod="openstack/nova-cell1-conductor-0" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.550979 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a40e352-7353-41e6-8c6e-58b7beca8ab9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1a40e352-7353-41e6-8c6e-58b7beca8ab9\") " pod="openstack/nova-cell1-conductor-0" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.571224 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a40e352-7353-41e6-8c6e-58b7beca8ab9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1a40e352-7353-41e6-8c6e-58b7beca8ab9\") " pod="openstack/nova-cell1-conductor-0" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.571979 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm4js\" (UniqueName: \"kubernetes.io/projected/1a40e352-7353-41e6-8c6e-58b7beca8ab9-kube-api-access-qm4js\") pod \"nova-cell1-conductor-0\" (UID: \"1a40e352-7353-41e6-8c6e-58b7beca8ab9\") " pod="openstack/nova-cell1-conductor-0" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.590911 5008 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a40e352-7353-41e6-8c6e-58b7beca8ab9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1a40e352-7353-41e6-8c6e-58b7beca8ab9\") " pod="openstack/nova-cell1-conductor-0" Jan 29 15:52:14 crc kubenswrapper[5008]: I0129 15:52:14.772891 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 15:52:15 crc kubenswrapper[5008]: I0129 15:52:15.244226 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 15:52:15 crc kubenswrapper[5008]: W0129 15:52:15.275594 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a40e352_7353_41e6_8c6e_58b7beca8ab9.slice/crio-b28ae5b5ff57fbd4e555d2d9db1c5d302ee3406e774dfb7ad2b06776a2585d70 WatchSource:0}: Error finding container b28ae5b5ff57fbd4e555d2d9db1c5d302ee3406e774dfb7ad2b06776a2585d70: Status 404 returned error can't find the container with id b28ae5b5ff57fbd4e555d2d9db1c5d302ee3406e774dfb7ad2b06776a2585d70 Jan 29 15:52:15 crc kubenswrapper[5008]: I0129 15:52:15.369438 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1a40e352-7353-41e6-8c6e-58b7beca8ab9","Type":"ContainerStarted","Data":"b28ae5b5ff57fbd4e555d2d9db1c5d302ee3406e774dfb7ad2b06776a2585d70"} Jan 29 15:52:15 crc kubenswrapper[5008]: I0129 15:52:15.678987 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 15:52:15 crc kubenswrapper[5008]: I0129 15:52:15.679050 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 15:52:16 crc kubenswrapper[5008]: I0129 15:52:16.384029 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1a40e352-7353-41e6-8c6e-58b7beca8ab9","Type":"ContainerStarted","Data":"82fdfd42d6fe42d23008b43a8882e8abe9c698de4e1ef0dac6a007e0ec6158c8"} Jan 29 15:52:16 crc kubenswrapper[5008]: I0129 15:52:16.384919 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 29 15:52:16 crc kubenswrapper[5008]: I0129 15:52:16.408205 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.408186471 podStartE2EDuration="2.408186471s" podCreationTimestamp="2026-01-29 15:52:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:52:16.406234713 +0000 UTC m=+1480.079089000" watchObservedRunningTime="2026-01-29 15:52:16.408186471 +0000 UTC m=+1480.081040718" Jan 29 15:52:16 crc kubenswrapper[5008]: I0129 15:52:16.721195 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="efd2d95c-747e-4f68-9eca-436834c87a96" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 15:52:16 crc kubenswrapper[5008]: I0129 15:52:16.764893 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="efd2d95c-747e-4f68-9eca-436834c87a96" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Jan 29 15:52:20 crc kubenswrapper[5008]: I0129 15:52:20.789758 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 15:52:20 crc kubenswrapper[5008]: I0129 15:52:20.791931 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 15:52:20 crc kubenswrapper[5008]: I0129 15:52:20.805526 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 15:52:21 crc kubenswrapper[5008]: I0129 15:52:21.434265 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 15:52:23 crc kubenswrapper[5008]: E0129 15:52:23.327279 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" Jan 29 15:52:23 crc kubenswrapper[5008]: E0129 15:52:23.628810 5008 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00b42485_f42b_4ca6_8e84_1a795454dd9f.slice/crio-9cfdb60cd6bab187b310c7e3b7b9918a771aed98988c83c807016cc578b45171\": RecentStats: unable to find data in memory cache]" Jan 29 15:52:24 crc kubenswrapper[5008]: I0129 15:52:24.468807 5008 generic.go:334] "Generic (PLEG): container finished" podID="13fcb7f1-5a0f-427b-a4a4-709553d1c88d" containerID="85b97eeb8fe553ff723bb92561ee6bde7c6975de4cf810b074233430e415f498" exitCode=137 Jan 29 15:52:24 crc kubenswrapper[5008]: I0129 15:52:24.468910 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"13fcb7f1-5a0f-427b-a4a4-709553d1c88d","Type":"ContainerDied","Data":"85b97eeb8fe553ff723bb92561ee6bde7c6975de4cf810b074233430e415f498"} Jan 29 15:52:24 crc kubenswrapper[5008]: I0129 15:52:24.627036 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:24 crc kubenswrapper[5008]: I0129 15:52:24.799417 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 29 15:52:24 crc kubenswrapper[5008]: I0129 15:52:24.805619 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-config-data\") pod \"13fcb7f1-5a0f-427b-a4a4-709553d1c88d\" (UID: \"13fcb7f1-5a0f-427b-a4a4-709553d1c88d\") " Jan 29 15:52:24 crc kubenswrapper[5008]: I0129 15:52:24.805726 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-combined-ca-bundle\") pod \"13fcb7f1-5a0f-427b-a4a4-709553d1c88d\" (UID: \"13fcb7f1-5a0f-427b-a4a4-709553d1c88d\") " Jan 29 15:52:24 crc kubenswrapper[5008]: I0129 15:52:24.805967 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbfkv\" (UniqueName: \"kubernetes.io/projected/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-kube-api-access-cbfkv\") pod \"13fcb7f1-5a0f-427b-a4a4-709553d1c88d\" (UID: \"13fcb7f1-5a0f-427b-a4a4-709553d1c88d\") " Jan 29 15:52:24 crc kubenswrapper[5008]: I0129 15:52:24.811293 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-kube-api-access-cbfkv" (OuterVolumeSpecName: "kube-api-access-cbfkv") pod "13fcb7f1-5a0f-427b-a4a4-709553d1c88d" (UID: "13fcb7f1-5a0f-427b-a4a4-709553d1c88d"). InnerVolumeSpecName "kube-api-access-cbfkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:52:24 crc kubenswrapper[5008]: I0129 15:52:24.847577 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-config-data" (OuterVolumeSpecName: "config-data") pod "13fcb7f1-5a0f-427b-a4a4-709553d1c88d" (UID: "13fcb7f1-5a0f-427b-a4a4-709553d1c88d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:24 crc kubenswrapper[5008]: I0129 15:52:24.849912 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13fcb7f1-5a0f-427b-a4a4-709553d1c88d" (UID: "13fcb7f1-5a0f-427b-a4a4-709553d1c88d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:24 crc kubenswrapper[5008]: I0129 15:52:24.908545 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbfkv\" (UniqueName: \"kubernetes.io/projected/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-kube-api-access-cbfkv\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:24 crc kubenswrapper[5008]: I0129 15:52:24.908571 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:24 crc kubenswrapper[5008]: I0129 15:52:24.908580 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13fcb7f1-5a0f-427b-a4a4-709553d1c88d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.480395 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"13fcb7f1-5a0f-427b-a4a4-709553d1c88d","Type":"ContainerDied","Data":"89acbc3b89babecb84402f3ec55311a2ac1633dd886e5581dfb789b75a401ac3"} Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.480469 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.480959 5008 scope.go:117] "RemoveContainer" containerID="85b97eeb8fe553ff723bb92561ee6bde7c6975de4cf810b074233430e415f498" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.527235 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.554959 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.569324 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 15:52:25 crc kubenswrapper[5008]: E0129 15:52:25.569668 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13fcb7f1-5a0f-427b-a4a4-709553d1c88d" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.569687 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="13fcb7f1-5a0f-427b-a4a4-709553d1c88d" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.569983 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="13fcb7f1-5a0f-427b-a4a4-709553d1c88d" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.570729 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.572969 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.573351 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.573514 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.577669 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.695854 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.696428 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.696463 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.698847 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.723917 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/21ca19b4-0317-4b08-8dc2-a4295c2fb8e4-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.724210 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21ca19b4-0317-4b08-8dc2-a4295c2fb8e4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.724337 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21ca19b4-0317-4b08-8dc2-a4295c2fb8e4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.724511 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbrsw\" (UniqueName: \"kubernetes.io/projected/21ca19b4-0317-4b08-8dc2-a4295c2fb8e4-kube-api-access-qbrsw\") pod \"nova-cell1-novncproxy-0\" (UID: \"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.724711 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/21ca19b4-0317-4b08-8dc2-a4295c2fb8e4-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.826637 5008 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qbrsw\" (UniqueName: \"kubernetes.io/projected/21ca19b4-0317-4b08-8dc2-a4295c2fb8e4-kube-api-access-qbrsw\") pod \"nova-cell1-novncproxy-0\" (UID: \"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.826714 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/21ca19b4-0317-4b08-8dc2-a4295c2fb8e4-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.826790 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/21ca19b4-0317-4b08-8dc2-a4295c2fb8e4-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.826857 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21ca19b4-0317-4b08-8dc2-a4295c2fb8e4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.826889 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21ca19b4-0317-4b08-8dc2-a4295c2fb8e4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.832224 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/21ca19b4-0317-4b08-8dc2-a4295c2fb8e4-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.832466 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21ca19b4-0317-4b08-8dc2-a4295c2fb8e4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.837465 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21ca19b4-0317-4b08-8dc2-a4295c2fb8e4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.838145 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/21ca19b4-0317-4b08-8dc2-a4295c2fb8e4-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.852883 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbrsw\" (UniqueName: 
\"kubernetes.io/projected/21ca19b4-0317-4b08-8dc2-a4295c2fb8e4-kube-api-access-qbrsw\") pod \"nova-cell1-novncproxy-0\" (UID: \"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:25 crc kubenswrapper[5008]: I0129 15:52:25.892072 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.441028 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 15:52:26 crc kubenswrapper[5008]: W0129 15:52:26.456993 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21ca19b4_0317_4b08_8dc2_a4295c2fb8e4.slice/crio-5d67c97d721e1080d30926dadfc79a16d7170f7cfb94187c47909b5c047cbb58 WatchSource:0}: Error finding container 5d67c97d721e1080d30926dadfc79a16d7170f7cfb94187c47909b5c047cbb58: Status 404 returned error can't find the container with id 5d67c97d721e1080d30926dadfc79a16d7170f7cfb94187c47909b5c047cbb58 Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.493708 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4","Type":"ContainerStarted","Data":"5d67c97d721e1080d30926dadfc79a16d7170f7cfb94187c47909b5c047cbb58"} Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.495514 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.516263 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.709952 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-ttnd7"] Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.718235 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.734746 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-ttnd7"] Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.849327 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.849641 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5brt\" (UniqueName: \"kubernetes.io/projected/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-kube-api-access-f5brt\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.849668 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.849685 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.849767 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-config\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.849789 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.951830 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.951882 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5brt\" (UniqueName: \"kubernetes.io/projected/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-kube-api-access-f5brt\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.951901 5008 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.951919 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.951999 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-config\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.952016 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.952666 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.952694 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.952753 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.952909 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.953307 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-config\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:26 crc kubenswrapper[5008]: I0129 15:52:26.970034 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5brt\" (UniqueName: 
\"kubernetes.io/projected/ffdf9dd1-5826-4e41-90ba-770e9ae42cc2-kube-api-access-f5brt\") pod \"dnsmasq-dns-cd5cbd7b9-ttnd7\" (UID: \"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:27 crc kubenswrapper[5008]: I0129 15:52:27.051117 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:27 crc kubenswrapper[5008]: I0129 15:52:27.344639 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13fcb7f1-5a0f-427b-a4a4-709553d1c88d" path="/var/lib/kubelet/pods/13fcb7f1-5a0f-427b-a4a4-709553d1c88d/volumes" Jan 29 15:52:27 crc kubenswrapper[5008]: I0129 15:52:27.503080 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"21ca19b4-0317-4b08-8dc2-a4295c2fb8e4","Type":"ContainerStarted","Data":"67f16d1b387a0d34b2551b42771ef2767b595fae063dd42beeac6345275b6da4"} Jan 29 15:52:27 crc kubenswrapper[5008]: I0129 15:52:27.527831 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.527810461 podStartE2EDuration="2.527810461s" podCreationTimestamp="2026-01-29 15:52:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:52:27.517486674 +0000 UTC m=+1491.190340931" watchObservedRunningTime="2026-01-29 15:52:27.527810461 +0000 UTC m=+1491.200664708" Jan 29 15:52:27 crc kubenswrapper[5008]: I0129 15:52:27.557496 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-ttnd7"] Jan 29 15:52:28 crc kubenswrapper[5008]: I0129 15:52:28.550341 5008 generic.go:334] "Generic (PLEG): container finished" podID="ffdf9dd1-5826-4e41-90ba-770e9ae42cc2" containerID="9123fb9e96d8e10624659bdb5df46afbf9710486281b7894a1e9c73d7a7fa101" exitCode=0 Jan 29 15:52:28 crc kubenswrapper[5008]: I0129 15:52:28.552752 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" event={"ID":"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2","Type":"ContainerDied","Data":"9123fb9e96d8e10624659bdb5df46afbf9710486281b7894a1e9c73d7a7fa101"} Jan 29 15:52:28 crc kubenswrapper[5008]: I0129 15:52:28.552785 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" event={"ID":"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2","Type":"ContainerStarted","Data":"8bb9aecd790e2955eab838b530d9b210e3da5bc976e325cecc92aa2c2f24aa45"} Jan 29 15:52:28 crc kubenswrapper[5008]: I0129 15:52:28.930374 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:52:28 crc kubenswrapper[5008]: I0129 15:52:28.930629 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" containerName="ceilometer-central-agent" containerID="cri-o://1f0cac0f22132fbe8eb8ceb4b6f38d3eb51e2e56dc4d95059f929e668ed362f6" gracePeriod=30 Jan 29 15:52:28 crc kubenswrapper[5008]: I0129 15:52:28.930705 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" containerName="sg-core" containerID="cri-o://b479429d051c9958a13fa2ef70a2c32999364b6d9f8db133530497550bd940a4" gracePeriod=30 Jan 29 15:52:28 crc kubenswrapper[5008]: I0129 15:52:28.930801 5008 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" containerName="ceilometer-notification-agent" containerID="cri-o://816da0ccd258b96ae016602b4eb20317eab184c219bbd3b28be883eb79a29a14" gracePeriod=30 Jan 29 15:52:29 crc kubenswrapper[5008]: I0129 15:52:29.312193 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:29 crc kubenswrapper[5008]: I0129 15:52:29.583882 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" event={"ID":"ffdf9dd1-5826-4e41-90ba-770e9ae42cc2","Type":"ContainerStarted","Data":"9a687724e247ca718da90a31ceafd46b7a02908221bfaf3e0c033da3a7d70d68"} Jan 29 15:52:29 crc kubenswrapper[5008]: I0129 15:52:29.585169 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:29 crc kubenswrapper[5008]: I0129 15:52:29.591308 5008 generic.go:334] "Generic (PLEG): container finished" podID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" containerID="b479429d051c9958a13fa2ef70a2c32999364b6d9f8db133530497550bd940a4" exitCode=2 Jan 29 15:52:29 crc kubenswrapper[5008]: I0129 15:52:29.591343 5008 generic.go:334] "Generic (PLEG): container finished" podID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" containerID="816da0ccd258b96ae016602b4eb20317eab184c219bbd3b28be883eb79a29a14" exitCode=0 Jan 29 15:52:29 crc kubenswrapper[5008]: I0129 15:52:29.591352 5008 generic.go:334] "Generic (PLEG): container finished" podID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" containerID="1f0cac0f22132fbe8eb8ceb4b6f38d3eb51e2e56dc4d95059f929e668ed362f6" exitCode=0 Jan 29 15:52:29 crc kubenswrapper[5008]: I0129 15:52:29.591389 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7","Type":"ContainerDied","Data":"b479429d051c9958a13fa2ef70a2c32999364b6d9f8db133530497550bd940a4"} Jan 29 15:52:29 crc kubenswrapper[5008]: I0129 15:52:29.591441 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7","Type":"ContainerDied","Data":"816da0ccd258b96ae016602b4eb20317eab184c219bbd3b28be883eb79a29a14"} Jan 29 15:52:29 crc kubenswrapper[5008]: I0129 15:52:29.591451 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7","Type":"ContainerDied","Data":"1f0cac0f22132fbe8eb8ceb4b6f38d3eb51e2e56dc4d95059f929e668ed362f6"} Jan 29 15:52:29 crc kubenswrapper[5008]: I0129 15:52:29.591551 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="efd2d95c-747e-4f68-9eca-436834c87a96" containerName="nova-api-log" containerID="cri-o://3dd7f1c9512e33fd74ad75dbb59ae738d4a68177c58dd491acfa86b6b891688b" gracePeriod=30 Jan 29 15:52:29 crc kubenswrapper[5008]: I0129 15:52:29.591649 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="efd2d95c-747e-4f68-9eca-436834c87a96" containerName="nova-api-api" containerID="cri-o://8a801ee0afabe9a56e81dd0e385057e7647970f6e434df2be1749ac0726c9c9c" gracePeriod=30 Jan 29 15:52:29 crc kubenswrapper[5008]: I0129 15:52:29.615785 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" podStartSLOduration=3.615764107 podStartE2EDuration="3.615764107s" podCreationTimestamp="2026-01-29 15:52:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:52:29.614041946 +0000 UTC m=+1493.286896173" watchObservedRunningTime="2026-01-29 15:52:29.615764107 +0000 UTC m=+1493.288618364" Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.615354 5008 generic.go:334] "Generic (PLEG): container finished" podID="efd2d95c-747e-4f68-9eca-436834c87a96" containerID="3dd7f1c9512e33fd74ad75dbb59ae738d4a68177c58dd491acfa86b6b891688b" exitCode=143 Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.615465 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"efd2d95c-747e-4f68-9eca-436834c87a96","Type":"ContainerDied","Data":"3dd7f1c9512e33fd74ad75dbb59ae738d4a68177c58dd491acfa86b6b891688b"} Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.761365 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.893307 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.960997 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-config-data\") pod \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.961100 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-scripts\") pod \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.961124 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vwdz\" (UniqueName: \"kubernetes.io/projected/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-kube-api-access-5vwdz\") pod \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.961184 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-combined-ca-bundle\") pod \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.961253 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-log-httpd\") pod \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.961292 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-sg-core-conf-yaml\") pod \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\" (UID: \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.961327 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-run-httpd\") pod \"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\" (UID: 
\"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7\") " Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.961572 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" (UID: "d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.961765 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" (UID: "d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.962101 5008 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.962131 5008 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.967037 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-kube-api-access-5vwdz" (OuterVolumeSpecName: "kube-api-access-5vwdz") pod "d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" (UID: "d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7"). InnerVolumeSpecName "kube-api-access-5vwdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.968662 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-scripts" (OuterVolumeSpecName: "scripts") pod "d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" (UID: "d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:30 crc kubenswrapper[5008]: I0129 15:52:30.998924 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" (UID: "d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.026140 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" (UID: "d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.028640 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-config-data" (OuterVolumeSpecName: "config-data") pod "d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" (UID: "d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.063954 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.064020 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.064038 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vwdz\" (UniqueName: \"kubernetes.io/projected/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-kube-api-access-5vwdz\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.064052 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.064064 5008 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.626917 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7","Type":"ContainerDied","Data":"0c880a32127e0f9cf20872f0cb9c9103c1ec0fcb4e31857d57145ee7e6ef5eff"} Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.626957 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.626987 5008 scope.go:117] "RemoveContainer" containerID="b479429d051c9958a13fa2ef70a2c32999364b6d9f8db133530497550bd940a4" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.654310 5008 scope.go:117] "RemoveContainer" containerID="816da0ccd258b96ae016602b4eb20317eab184c219bbd3b28be883eb79a29a14" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.674066 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.688996 5008 scope.go:117] "RemoveContainer" containerID="1f0cac0f22132fbe8eb8ceb4b6f38d3eb51e2e56dc4d95059f929e668ed362f6" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.712670 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.721156 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:52:31 crc kubenswrapper[5008]: E0129 15:52:31.721734 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" containerName="ceilometer-notification-agent" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.721757 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" containerName="ceilometer-notification-agent" Jan 29 15:52:31 crc kubenswrapper[5008]: E0129 15:52:31.721775 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" containerName="sg-core" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.721794 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" containerName="sg-core" Jan 29 15:52:31 crc kubenswrapper[5008]: E0129 15:52:31.721817 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" containerName="ceilometer-central-agent" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.721824 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" containerName="ceilometer-central-agent" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.722044 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" containerName="ceilometer-notification-agent" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.722080 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" containerName="sg-core" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.722098 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" containerName="ceilometer-central-agent" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.723954 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.726110 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.726633 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.762692 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.885742 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zk8n\" (UniqueName: \"kubernetes.io/projected/d40740f9-e8d8-4f46-b8b0-d913a6c33210-kube-api-access-4zk8n\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.885982 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.886030 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40740f9-e8d8-4f46-b8b0-d913a6c33210-log-httpd\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.886209 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40740f9-e8d8-4f46-b8b0-d913a6c33210-run-httpd\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.886247 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.886298 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-scripts\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.886329 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-config-data\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.988397 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40740f9-e8d8-4f46-b8b0-d913a6c33210-run-httpd\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.988451 5008 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.988486 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-scripts\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.988507 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-config-data\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.988529 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zk8n\" (UniqueName: \"kubernetes.io/projected/d40740f9-e8d8-4f46-b8b0-d913a6c33210-kube-api-access-4zk8n\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.988570 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.988586 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40740f9-e8d8-4f46-b8b0-d913a6c33210-log-httpd\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.989488 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40740f9-e8d8-4f46-b8b0-d913a6c33210-run-httpd\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.989607 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40740f9-e8d8-4f46-b8b0-d913a6c33210-log-httpd\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.993660 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-scripts\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.993696 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:31 crc kubenswrapper[5008]: I0129 15:52:31.994918 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-config-data\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:32 crc kubenswrapper[5008]: I0129 15:52:32.003676 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:32 crc kubenswrapper[5008]: I0129 15:52:32.006735 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zk8n\" (UniqueName: \"kubernetes.io/projected/d40740f9-e8d8-4f46-b8b0-d913a6c33210-kube-api-access-4zk8n\") pod \"ceilometer-0\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " pod="openstack/ceilometer-0" Jan 29 15:52:32 crc kubenswrapper[5008]: I0129 15:52:32.056021 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:52:32 crc kubenswrapper[5008]: I0129 15:52:32.535189 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:52:32 crc kubenswrapper[5008]: I0129 15:52:32.637971 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40740f9-e8d8-4f46-b8b0-d913a6c33210","Type":"ContainerStarted","Data":"c0e05b5105ed0e3757d467eff34631c34dcca13e2acddb3cd6556349dd4ddb10"} Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.285087 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.315325 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efd2d95c-747e-4f68-9eca-436834c87a96-config-data\") pod \"efd2d95c-747e-4f68-9eca-436834c87a96\" (UID: \"efd2d95c-747e-4f68-9eca-436834c87a96\") " Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.315492 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efd2d95c-747e-4f68-9eca-436834c87a96-logs\") pod \"efd2d95c-747e-4f68-9eca-436834c87a96\" (UID: \"efd2d95c-747e-4f68-9eca-436834c87a96\") " Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.315519 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmzzn\" (UniqueName: \"kubernetes.io/projected/efd2d95c-747e-4f68-9eca-436834c87a96-kube-api-access-kmzzn\") pod \"efd2d95c-747e-4f68-9eca-436834c87a96\" (UID: \"efd2d95c-747e-4f68-9eca-436834c87a96\") " Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.315560 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd2d95c-747e-4f68-9eca-436834c87a96-combined-ca-bundle\") pod \"efd2d95c-747e-4f68-9eca-436834c87a96\" (UID: \"efd2d95c-747e-4f68-9eca-436834c87a96\") " Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.316160 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efd2d95c-747e-4f68-9eca-436834c87a96-logs" (OuterVolumeSpecName: "logs") pod "efd2d95c-747e-4f68-9eca-436834c87a96" (UID: "efd2d95c-747e-4f68-9eca-436834c87a96"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.347826 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efd2d95c-747e-4f68-9eca-436834c87a96-kube-api-access-kmzzn" (OuterVolumeSpecName: "kube-api-access-kmzzn") pod "efd2d95c-747e-4f68-9eca-436834c87a96" (UID: "efd2d95c-747e-4f68-9eca-436834c87a96"). InnerVolumeSpecName "kube-api-access-kmzzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.353129 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efd2d95c-747e-4f68-9eca-436834c87a96-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "efd2d95c-747e-4f68-9eca-436834c87a96" (UID: "efd2d95c-747e-4f68-9eca-436834c87a96"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.356021 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efd2d95c-747e-4f68-9eca-436834c87a96-config-data" (OuterVolumeSpecName: "config-data") pod "efd2d95c-747e-4f68-9eca-436834c87a96" (UID: "efd2d95c-747e-4f68-9eca-436834c87a96"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.378381 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7" path="/var/lib/kubelet/pods/d1ab502b-4ec7-4a0b-b7a4-ed10d3f26be7/volumes" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.417491 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd2d95c-747e-4f68-9eca-436834c87a96-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.417528 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efd2d95c-747e-4f68-9eca-436834c87a96-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.417537 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efd2d95c-747e-4f68-9eca-436834c87a96-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.417546 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmzzn\" (UniqueName: \"kubernetes.io/projected/efd2d95c-747e-4f68-9eca-436834c87a96-kube-api-access-kmzzn\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.649940 5008 generic.go:334] "Generic (PLEG): container finished" podID="efd2d95c-747e-4f68-9eca-436834c87a96" containerID="8a801ee0afabe9a56e81dd0e385057e7647970f6e434df2be1749ac0726c9c9c" exitCode=0 Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.649991 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.649979 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"efd2d95c-747e-4f68-9eca-436834c87a96","Type":"ContainerDied","Data":"8a801ee0afabe9a56e81dd0e385057e7647970f6e434df2be1749ac0726c9c9c"} Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.650118 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"efd2d95c-747e-4f68-9eca-436834c87a96","Type":"ContainerDied","Data":"fffdd9b250912494bb4bca4bfd92b94d0781e1cf0cbf079d1c4fe2bc1d2f70ff"} Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.650137 5008 scope.go:117] "RemoveContainer" containerID="8a801ee0afabe9a56e81dd0e385057e7647970f6e434df2be1749ac0726c9c9c" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.679413 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.682803 5008 scope.go:117] "RemoveContainer" containerID="3dd7f1c9512e33fd74ad75dbb59ae738d4a68177c58dd491acfa86b6b891688b" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.694310 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.703056 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:33 crc kubenswrapper[5008]: E0129 15:52:33.703420 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efd2d95c-747e-4f68-9eca-436834c87a96" containerName="nova-api-log" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.703430 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="efd2d95c-747e-4f68-9eca-436834c87a96" containerName="nova-api-log" Jan 29 15:52:33 crc kubenswrapper[5008]: E0129 15:52:33.703456 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efd2d95c-747e-4f68-9eca-436834c87a96" containerName="nova-api-api" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.703463 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="efd2d95c-747e-4f68-9eca-436834c87a96" containerName="nova-api-api" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.705124 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="efd2d95c-747e-4f68-9eca-436834c87a96" containerName="nova-api-api" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.705158 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="efd2d95c-747e-4f68-9eca-436834c87a96" containerName="nova-api-log" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.706131 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.708775 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.709017 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.709152 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.723470 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns2nh\" (UniqueName: \"kubernetes.io/projected/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-kube-api-access-ns2nh\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.723531 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.723598 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-logs\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.723617 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.723648 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-public-tls-certs\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.723686 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-config-data\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.723763 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.759911 5008 scope.go:117] "RemoveContainer" containerID="8a801ee0afabe9a56e81dd0e385057e7647970f6e434df2be1749ac0726c9c9c" Jan 29 15:52:33 crc kubenswrapper[5008]: E0129 15:52:33.761585 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a801ee0afabe9a56e81dd0e385057e7647970f6e434df2be1749ac0726c9c9c\": container with ID starting with 8a801ee0afabe9a56e81dd0e385057e7647970f6e434df2be1749ac0726c9c9c not found: ID does not exist" 
containerID="8a801ee0afabe9a56e81dd0e385057e7647970f6e434df2be1749ac0726c9c9c" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.761626 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a801ee0afabe9a56e81dd0e385057e7647970f6e434df2be1749ac0726c9c9c"} err="failed to get container status \"8a801ee0afabe9a56e81dd0e385057e7647970f6e434df2be1749ac0726c9c9c\": rpc error: code = NotFound desc = could not find container \"8a801ee0afabe9a56e81dd0e385057e7647970f6e434df2be1749ac0726c9c9c\": container with ID starting with 8a801ee0afabe9a56e81dd0e385057e7647970f6e434df2be1749ac0726c9c9c not found: ID does not exist" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.761657 5008 scope.go:117] "RemoveContainer" containerID="3dd7f1c9512e33fd74ad75dbb59ae738d4a68177c58dd491acfa86b6b891688b" Jan 29 15:52:33 crc kubenswrapper[5008]: E0129 15:52:33.762075 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dd7f1c9512e33fd74ad75dbb59ae738d4a68177c58dd491acfa86b6b891688b\": container with ID starting with 3dd7f1c9512e33fd74ad75dbb59ae738d4a68177c58dd491acfa86b6b891688b not found: ID does not exist" containerID="3dd7f1c9512e33fd74ad75dbb59ae738d4a68177c58dd491acfa86b6b891688b" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.762099 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dd7f1c9512e33fd74ad75dbb59ae738d4a68177c58dd491acfa86b6b891688b"} err="failed to get container status \"3dd7f1c9512e33fd74ad75dbb59ae738d4a68177c58dd491acfa86b6b891688b\": rpc error: code = NotFound desc = could not find container \"3dd7f1c9512e33fd74ad75dbb59ae738d4a68177c58dd491acfa86b6b891688b\": container with ID starting with 3dd7f1c9512e33fd74ad75dbb59ae738d4a68177c58dd491acfa86b6b891688b not found: ID does not exist" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.824916 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.825169 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-logs\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.825193 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.825226 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-public-tls-certs\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.825264 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-config-data\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.825315 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns2nh\" (UniqueName: \"kubernetes.io/projected/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-kube-api-access-ns2nh\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.825522 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-logs\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.835854 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-config-data\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.837740 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-public-tls-certs\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.838535 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.840212 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns2nh\" (UniqueName: \"kubernetes.io/projected/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-kube-api-access-ns2nh\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: I0129 15:52:33.851469 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " pod="openstack/nova-api-0" Jan 29 15:52:33 crc kubenswrapper[5008]: E0129 15:52:33.912097 5008 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00b42485_f42b_4ca6_8e84_1a795454dd9f.slice/crio-9cfdb60cd6bab187b310c7e3b7b9918a771aed98988c83c807016cc578b45171\": RecentStats: unable to find data in memory cache]" Jan 29 15:52:34 crc kubenswrapper[5008]: I0129 15:52:34.066310 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 15:52:34 crc kubenswrapper[5008]: I0129 15:52:34.563468 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:34 crc kubenswrapper[5008]: I0129 15:52:34.661970 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40740f9-e8d8-4f46-b8b0-d913a6c33210","Type":"ContainerStarted","Data":"cbbd1ae9f5180a48bfb6b0e06422201465dab2f80d3bcb0bb07d69614c78274c"} Jan 29 15:52:34 crc kubenswrapper[5008]: I0129 15:52:34.663561 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6","Type":"ContainerStarted","Data":"f95804822e24c4b9f3caf2c4f8e60772c884987c449b1013ddd08314002b1592"} Jan 29 15:52:35 crc kubenswrapper[5008]: I0129 15:52:35.336549 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efd2d95c-747e-4f68-9eca-436834c87a96" path="/var/lib/kubelet/pods/efd2d95c-747e-4f68-9eca-436834c87a96/volumes" Jan 29 15:52:35 crc kubenswrapper[5008]: I0129 15:52:35.677400 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40740f9-e8d8-4f46-b8b0-d913a6c33210","Type":"ContainerStarted","Data":"c4722e08cd543a7198136070e2b6ad5db84511db8bbbbb4f4cc49e9edd0c3d33"} Jan 29 15:52:35 crc kubenswrapper[5008]: I0129 15:52:35.680777 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6","Type":"ContainerStarted","Data":"f79f38ff0afa3885296e624a49ae42810a26d27a384ceccb3214269c19350348"} Jan 29 15:52:35 crc kubenswrapper[5008]: I0129 15:52:35.680900 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6","Type":"ContainerStarted","Data":"bcb62e0a30103f70c2e23448f433250c8f5931d78a534a384a1188d58be16119"} Jan 29 15:52:35 crc kubenswrapper[5008]: I0129 15:52:35.709470 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.709433789 podStartE2EDuration="2.709433789s" podCreationTimestamp="2026-01-29 15:52:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:52:35.705054094 +0000 UTC m=+1499.377908391" watchObservedRunningTime="2026-01-29 15:52:35.709433789 +0000 UTC m=+1499.382288056" Jan 29 15:52:35 crc kubenswrapper[5008]: I0129 15:52:35.892931 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:35 crc kubenswrapper[5008]: I0129 15:52:35.915675 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:36 crc kubenswrapper[5008]: I0129 15:52:36.703035 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 29 15:52:36 crc kubenswrapper[5008]: I0129 15:52:36.934967 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-k4msd"] Jan 29 15:52:36 crc kubenswrapper[5008]: I0129 15:52:36.936185 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k4msd" Jan 29 15:52:36 crc kubenswrapper[5008]: I0129 15:52:36.946333 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 29 15:52:36 crc kubenswrapper[5008]: I0129 15:52:36.953154 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 29 15:52:36 crc kubenswrapper[5008]: I0129 15:52:36.962340 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-k4msd"] Jan 29 15:52:36 crc kubenswrapper[5008]: I0129 15:52:36.984912 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kn6j\" (UniqueName: \"kubernetes.io/projected/dfacde84-7d28-464b-8854-622fd127956c-kube-api-access-4kn6j\") pod \"nova-cell1-cell-mapping-k4msd\" (UID: \"dfacde84-7d28-464b-8854-622fd127956c\") " pod="openstack/nova-cell1-cell-mapping-k4msd" Jan 29 15:52:36 crc kubenswrapper[5008]: I0129 15:52:36.984982 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-scripts\") pod \"nova-cell1-cell-mapping-k4msd\" (UID: \"dfacde84-7d28-464b-8854-622fd127956c\") " pod="openstack/nova-cell1-cell-mapping-k4msd" Jan 29 15:52:36 crc kubenswrapper[5008]: I0129 15:52:36.985055 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-k4msd\" (UID: \"dfacde84-7d28-464b-8854-622fd127956c\") " pod="openstack/nova-cell1-cell-mapping-k4msd" Jan 29 15:52:36 crc kubenswrapper[5008]: I0129 15:52:36.985103 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-config-data\") pod \"nova-cell1-cell-mapping-k4msd\" (UID: \"dfacde84-7d28-464b-8854-622fd127956c\") " pod="openstack/nova-cell1-cell-mapping-k4msd" Jan 29 15:52:37 crc kubenswrapper[5008]: I0129 15:52:37.053473 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-ttnd7" Jan 29 15:52:37 crc kubenswrapper[5008]: I0129 15:52:37.086360 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-k4msd\" (UID: \"dfacde84-7d28-464b-8854-622fd127956c\") " pod="openstack/nova-cell1-cell-mapping-k4msd" Jan 29 15:52:37 crc kubenswrapper[5008]: I0129 15:52:37.086475 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-config-data\") pod \"nova-cell1-cell-mapping-k4msd\" (UID: \"dfacde84-7d28-464b-8854-622fd127956c\") " pod="openstack/nova-cell1-cell-mapping-k4msd" Jan 29 15:52:37 crc kubenswrapper[5008]: I0129 15:52:37.086565 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kn6j\" (UniqueName: \"kubernetes.io/projected/dfacde84-7d28-464b-8854-622fd127956c-kube-api-access-4kn6j\") pod \"nova-cell1-cell-mapping-k4msd\" (UID: \"dfacde84-7d28-464b-8854-622fd127956c\") " 
pod="openstack/nova-cell1-cell-mapping-k4msd" Jan 29 15:52:37 crc kubenswrapper[5008]: I0129 15:52:37.086601 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-scripts\") pod \"nova-cell1-cell-mapping-k4msd\" (UID: \"dfacde84-7d28-464b-8854-622fd127956c\") " pod="openstack/nova-cell1-cell-mapping-k4msd" Jan 29 15:52:37 crc kubenswrapper[5008]: I0129 15:52:37.093684 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-k4msd\" (UID: \"dfacde84-7d28-464b-8854-622fd127956c\") " pod="openstack/nova-cell1-cell-mapping-k4msd" Jan 29 15:52:37 crc kubenswrapper[5008]: I0129 15:52:37.093849 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-scripts\") pod \"nova-cell1-cell-mapping-k4msd\" (UID: \"dfacde84-7d28-464b-8854-622fd127956c\") " pod="openstack/nova-cell1-cell-mapping-k4msd" Jan 29 15:52:37 crc kubenswrapper[5008]: I0129 15:52:37.110127 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-config-data\") pod \"nova-cell1-cell-mapping-k4msd\" (UID: \"dfacde84-7d28-464b-8854-622fd127956c\") " pod="openstack/nova-cell1-cell-mapping-k4msd" Jan 29 15:52:37 crc kubenswrapper[5008]: I0129 15:52:37.167221 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kn6j\" (UniqueName: \"kubernetes.io/projected/dfacde84-7d28-464b-8854-622fd127956c-kube-api-access-4kn6j\") pod \"nova-cell1-cell-mapping-k4msd\" (UID: \"dfacde84-7d28-464b-8854-622fd127956c\") " pod="openstack/nova-cell1-cell-mapping-k4msd" Jan 29 15:52:37 crc kubenswrapper[5008]: I0129 15:52:37.210200 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-xx5z4"] Jan 29 15:52:37 crc kubenswrapper[5008]: I0129 15:52:37.210448 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" podUID="65ae154d-9b35-408c-bcdb-8b9601be71c8" containerName="dnsmasq-dns" containerID="cri-o://1d607350ffbc24ef275435eb4ae5dec525e6f42db8162f7bae09094480df98a3" gracePeriod=10 Jan 29 15:52:37 crc kubenswrapper[5008]: I0129 15:52:37.254244 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k4msd" Jan 29 15:52:37 crc kubenswrapper[5008]: I0129 15:52:37.703238 5008 generic.go:334] "Generic (PLEG): container finished" podID="65ae154d-9b35-408c-bcdb-8b9601be71c8" containerID="1d607350ffbc24ef275435eb4ae5dec525e6f42db8162f7bae09094480df98a3" exitCode=0 Jan 29 15:52:37 crc kubenswrapper[5008]: I0129 15:52:37.703435 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" event={"ID":"65ae154d-9b35-408c-bcdb-8b9601be71c8","Type":"ContainerDied","Data":"1d607350ffbc24ef275435eb4ae5dec525e6f42db8162f7bae09094480df98a3"} Jan 29 15:52:37 crc kubenswrapper[5008]: I0129 15:52:37.828387 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-k4msd"] Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.504329 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:52:38 crc kubenswrapper[5008]: E0129 15:52:38.549358 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 29 15:52:38 crc kubenswrapper[5008]: E0129 15:52:38.549579 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zk8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d40740f9-e8d8-4f46-b8b0-d913a6c33210): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:52:38 crc kubenswrapper[5008]: E0129 15:52:38.550810 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source 
docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.571726 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-config\") pod \"65ae154d-9b35-408c-bcdb-8b9601be71c8\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.571813 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-dns-swift-storage-0\") pod \"65ae154d-9b35-408c-bcdb-8b9601be71c8\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.571976 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-ovsdbserver-nb\") pod \"65ae154d-9b35-408c-bcdb-8b9601be71c8\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.572073 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-dns-svc\") pod \"65ae154d-9b35-408c-bcdb-8b9601be71c8\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.572106 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-ovsdbserver-sb\") pod \"65ae154d-9b35-408c-bcdb-8b9601be71c8\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.572185 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2ns2\" (UniqueName: \"kubernetes.io/projected/65ae154d-9b35-408c-bcdb-8b9601be71c8-kube-api-access-c2ns2\") pod \"65ae154d-9b35-408c-bcdb-8b9601be71c8\" (UID: \"65ae154d-9b35-408c-bcdb-8b9601be71c8\") " Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.590418 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65ae154d-9b35-408c-bcdb-8b9601be71c8-kube-api-access-c2ns2" (OuterVolumeSpecName: "kube-api-access-c2ns2") pod "65ae154d-9b35-408c-bcdb-8b9601be71c8" (UID: "65ae154d-9b35-408c-bcdb-8b9601be71c8"). InnerVolumeSpecName "kube-api-access-c2ns2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.645525 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-config" (OuterVolumeSpecName: "config") pod "65ae154d-9b35-408c-bcdb-8b9601be71c8" (UID: "65ae154d-9b35-408c-bcdb-8b9601be71c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.652277 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "65ae154d-9b35-408c-bcdb-8b9601be71c8" (UID: "65ae154d-9b35-408c-bcdb-8b9601be71c8"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.658287 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "65ae154d-9b35-408c-bcdb-8b9601be71c8" (UID: "65ae154d-9b35-408c-bcdb-8b9601be71c8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.663003 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "65ae154d-9b35-408c-bcdb-8b9601be71c8" (UID: "65ae154d-9b35-408c-bcdb-8b9601be71c8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.672336 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "65ae154d-9b35-408c-bcdb-8b9601be71c8" (UID: "65ae154d-9b35-408c-bcdb-8b9601be71c8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.673731 5008 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.673772 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.673802 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2ns2\" (UniqueName: \"kubernetes.io/projected/65ae154d-9b35-408c-bcdb-8b9601be71c8-kube-api-access-c2ns2\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.673815 5008 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.673825 5008 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.673844 5008 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/65ae154d-9b35-408c-bcdb-8b9601be71c8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.711911 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" event={"ID":"65ae154d-9b35-408c-bcdb-8b9601be71c8","Type":"ContainerDied","Data":"30bedbc0bc93f8ca5f3511d1081097f8182d9fc6d6457e41dfa4a6a23655328a"} Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.711967 5008 scope.go:117] "RemoveContainer" containerID="1d607350ffbc24ef275435eb4ae5dec525e6f42db8162f7bae09094480df98a3" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 
15:52:38.712105 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-xx5z4" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.718976 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k4msd" event={"ID":"dfacde84-7d28-464b-8854-622fd127956c","Type":"ContainerStarted","Data":"0bd2718859e8227e4d8612c327ecd5f34368bcc87d5e43cf15084febf3a519cd"} Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.719006 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k4msd" event={"ID":"dfacde84-7d28-464b-8854-622fd127956c","Type":"ContainerStarted","Data":"60f6f2d51c09764cec2183e64ffad97ab37cff7efd3bc98a46ccc51e42738f09"} Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.722665 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40740f9-e8d8-4f46-b8b0-d913a6c33210","Type":"ContainerStarted","Data":"94c1a4df24e57801e6f811a20fbda55d2b2aa44f90464614f709fcc1c7771571"} Jan 29 15:52:38 crc kubenswrapper[5008]: E0129 15:52:38.724998 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.736549 5008 scope.go:117] "RemoveContainer" containerID="60289a7b443137e8ea46321b53a131c528f20b282f9018e51ed60f8d48fdfbaa" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.756375 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-k4msd" podStartSLOduration=2.756358693 podStartE2EDuration="2.756358693s" podCreationTimestamp="2026-01-29 15:52:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:52:38.736096979 +0000 UTC m=+1502.408951226" watchObservedRunningTime="2026-01-29 15:52:38.756358693 +0000 UTC m=+1502.429212930" Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.767539 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-xx5z4"] Jan 29 15:52:38 crc kubenswrapper[5008]: I0129 15:52:38.777228 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-xx5z4"] Jan 29 15:52:39 crc kubenswrapper[5008]: I0129 15:52:39.334273 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65ae154d-9b35-408c-bcdb-8b9601be71c8" path="/var/lib/kubelet/pods/65ae154d-9b35-408c-bcdb-8b9601be71c8/volumes" Jan 29 15:52:39 crc kubenswrapper[5008]: E0129 15:52:39.733341 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:52:43 crc kubenswrapper[5008]: I0129 15:52:43.791029 5008 generic.go:334] "Generic (PLEG): container finished" podID="dfacde84-7d28-464b-8854-622fd127956c" containerID="0bd2718859e8227e4d8612c327ecd5f34368bcc87d5e43cf15084febf3a519cd" exitCode=0 Jan 29 15:52:43 crc kubenswrapper[5008]: I0129 15:52:43.791078 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k4msd" 
event={"ID":"dfacde84-7d28-464b-8854-622fd127956c","Type":"ContainerDied","Data":"0bd2718859e8227e4d8612c327ecd5f34368bcc87d5e43cf15084febf3a519cd"} Jan 29 15:52:43 crc kubenswrapper[5008]: I0129 15:52:43.990472 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:52:43 crc kubenswrapper[5008]: I0129 15:52:43.990552 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:52:43 crc kubenswrapper[5008]: I0129 15:52:43.990618 5008 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:52:43 crc kubenswrapper[5008]: I0129 15:52:43.991688 5008 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"65ae63639c2ed32e45710e52e6b068b2f105163d6a00247deb197db6c3e0b41c"} pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:52:43 crc kubenswrapper[5008]: I0129 15:52:43.991815 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" containerID="cri-o://65ae63639c2ed32e45710e52e6b068b2f105163d6a00247deb197db6c3e0b41c" gracePeriod=600 Jan 29 15:52:44 crc kubenswrapper[5008]: I0129 15:52:44.066923 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 15:52:44 crc kubenswrapper[5008]: I0129 15:52:44.067000 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 15:52:44 crc kubenswrapper[5008]: I0129 15:52:44.815203 5008 generic.go:334] "Generic (PLEG): container finished" podID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerID="65ae63639c2ed32e45710e52e6b068b2f105163d6a00247deb197db6c3e0b41c" exitCode=0 Jan 29 15:52:44 crc kubenswrapper[5008]: I0129 15:52:44.815426 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerDied","Data":"65ae63639c2ed32e45710e52e6b068b2f105163d6a00247deb197db6c3e0b41c"} Jan 29 15:52:44 crc kubenswrapper[5008]: I0129 15:52:44.815550 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerStarted","Data":"1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19"} Jan 29 15:52:44 crc kubenswrapper[5008]: I0129 15:52:44.815588 5008 scope.go:117] "RemoveContainer" containerID="afcf72806e2f44481eaccbb425ccc0452067f0e28ee8224a454fe6d6fab03a1b" Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.094923 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.095277 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.241439 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k4msd" Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.295643 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-scripts\") pod \"dfacde84-7d28-464b-8854-622fd127956c\" (UID: \"dfacde84-7d28-464b-8854-622fd127956c\") " Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.295815 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-config-data\") pod \"dfacde84-7d28-464b-8854-622fd127956c\" (UID: \"dfacde84-7d28-464b-8854-622fd127956c\") " Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.295891 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kn6j\" (UniqueName: \"kubernetes.io/projected/dfacde84-7d28-464b-8854-622fd127956c-kube-api-access-4kn6j\") pod \"dfacde84-7d28-464b-8854-622fd127956c\" (UID: \"dfacde84-7d28-464b-8854-622fd127956c\") " Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.295940 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-combined-ca-bundle\") pod \"dfacde84-7d28-464b-8854-622fd127956c\" (UID: \"dfacde84-7d28-464b-8854-622fd127956c\") " Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.311963 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfacde84-7d28-464b-8854-622fd127956c-kube-api-access-4kn6j" (OuterVolumeSpecName: "kube-api-access-4kn6j") pod "dfacde84-7d28-464b-8854-622fd127956c" (UID: "dfacde84-7d28-464b-8854-622fd127956c"). InnerVolumeSpecName "kube-api-access-4kn6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.316283 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-scripts" (OuterVolumeSpecName: "scripts") pod "dfacde84-7d28-464b-8854-622fd127956c" (UID: "dfacde84-7d28-464b-8854-622fd127956c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.349582 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dfacde84-7d28-464b-8854-622fd127956c" (UID: "dfacde84-7d28-464b-8854-622fd127956c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.353416 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-config-data" (OuterVolumeSpecName: "config-data") pod "dfacde84-7d28-464b-8854-622fd127956c" (UID: "dfacde84-7d28-464b-8854-622fd127956c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.398873 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.399062 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kn6j\" (UniqueName: \"kubernetes.io/projected/dfacde84-7d28-464b-8854-622fd127956c-kube-api-access-4kn6j\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.399168 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.399267 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfacde84-7d28-464b-8854-622fd127956c-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.826757 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k4msd" event={"ID":"dfacde84-7d28-464b-8854-622fd127956c","Type":"ContainerDied","Data":"60f6f2d51c09764cec2183e64ffad97ab37cff7efd3bc98a46ccc51e42738f09"} Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.826811 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k4msd" Jan 29 15:52:45 crc kubenswrapper[5008]: I0129 15:52:45.826820 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60f6f2d51c09764cec2183e64ffad97ab37cff7efd3bc98a46ccc51e42738f09" Jan 29 15:52:46 crc kubenswrapper[5008]: I0129 15:52:46.025688 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 15:52:46 crc kubenswrapper[5008]: I0129 15:52:46.026327 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="2fb31b59-3f31-4c28-ab5c-e2248ed9fd68" containerName="nova-scheduler-scheduler" containerID="cri-o://17b2938a300945d89c2e820081e7b2a24c3ca3bec8b7edb3be53cf8c5bdf2768" gracePeriod=30 Jan 29 15:52:46 crc kubenswrapper[5008]: I0129 15:52:46.045066 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:46 crc kubenswrapper[5008]: I0129 15:52:46.045385 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" containerName="nova-api-log" containerID="cri-o://bcb62e0a30103f70c2e23448f433250c8f5931d78a534a384a1188d58be16119" gracePeriod=30 Jan 29 15:52:46 crc kubenswrapper[5008]: I0129 15:52:46.046377 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" containerName="nova-api-api" containerID="cri-o://f79f38ff0afa3885296e624a49ae42810a26d27a384ceccb3214269c19350348" gracePeriod=30 Jan 29 15:52:46 crc kubenswrapper[5008]: I0129 15:52:46.194913 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:52:46 crc kubenswrapper[5008]: I0129 15:52:46.195203 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="038b9a46-5128-497b-8073-557e8f3542fb" containerName="nova-metadata-log" containerID="cri-o://b1cb4fe0e965ed395741ca05d4744c778b350ee5b58ae99ed0af4f4789b2408e" gracePeriod=30 Jan 29 15:52:46 crc kubenswrapper[5008]: I0129 15:52:46.195248 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="038b9a46-5128-497b-8073-557e8f3542fb" containerName="nova-metadata-metadata" containerID="cri-o://951b0f36fd6a684d8c30fa21487872b1f27e31c08947dd98a725b29af452b297" gracePeriod=30 Jan 29 15:52:46 crc kubenswrapper[5008]: I0129 15:52:46.844175 5008 generic.go:334] "Generic (PLEG): container finished" podID="038b9a46-5128-497b-8073-557e8f3542fb" containerID="b1cb4fe0e965ed395741ca05d4744c778b350ee5b58ae99ed0af4f4789b2408e" exitCode=143 Jan 29 15:52:46 crc kubenswrapper[5008]: I0129 15:52:46.844247 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"038b9a46-5128-497b-8073-557e8f3542fb","Type":"ContainerDied","Data":"b1cb4fe0e965ed395741ca05d4744c778b350ee5b58ae99ed0af4f4789b2408e"} Jan 29 15:52:46 crc kubenswrapper[5008]: I0129 15:52:46.846720 5008 generic.go:334] "Generic (PLEG): container finished" podID="f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" containerID="bcb62e0a30103f70c2e23448f433250c8f5931d78a534a384a1188d58be16119" exitCode=143 Jan 29 15:52:46 crc kubenswrapper[5008]: I0129 15:52:46.846749 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6","Type":"ContainerDied","Data":"bcb62e0a30103f70c2e23448f433250c8f5931d78a534a384a1188d58be16119"} Jan 29 15:52:47 crc kubenswrapper[5008]: E0129 15:52:47.829459 5008 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17b2938a300945d89c2e820081e7b2a24c3ca3bec8b7edb3be53cf8c5bdf2768 is running failed: container process not found" containerID="17b2938a300945d89c2e820081e7b2a24c3ca3bec8b7edb3be53cf8c5bdf2768" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 15:52:47 crc kubenswrapper[5008]: E0129 15:52:47.830534 5008 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17b2938a300945d89c2e820081e7b2a24c3ca3bec8b7edb3be53cf8c5bdf2768 is running failed: container process not found" containerID="17b2938a300945d89c2e820081e7b2a24c3ca3bec8b7edb3be53cf8c5bdf2768" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 15:52:47 crc kubenswrapper[5008]: E0129 15:52:47.831062 5008 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17b2938a300945d89c2e820081e7b2a24c3ca3bec8b7edb3be53cf8c5bdf2768 is running failed: container process not found" containerID="17b2938a300945d89c2e820081e7b2a24c3ca3bec8b7edb3be53cf8c5bdf2768" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 15:52:47 crc kubenswrapper[5008]: E0129 15:52:47.831123 5008 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17b2938a300945d89c2e820081e7b2a24c3ca3bec8b7edb3be53cf8c5bdf2768 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="2fb31b59-3f31-4c28-ab5c-e2248ed9fd68" containerName="nova-scheduler-scheduler" Jan 29 15:52:48 crc kubenswrapper[5008]: I0129 15:52:48.910238 5008 generic.go:334] "Generic (PLEG): container finished" podID="2fb31b59-3f31-4c28-ab5c-e2248ed9fd68" containerID="17b2938a300945d89c2e820081e7b2a24c3ca3bec8b7edb3be53cf8c5bdf2768" exitCode=0 Jan 29 15:52:48 crc kubenswrapper[5008]: I0129 15:52:48.910314 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68","Type":"ContainerDied","Data":"17b2938a300945d89c2e820081e7b2a24c3ca3bec8b7edb3be53cf8c5bdf2768"} Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.558816 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.584338 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-combined-ca-bundle\") pod \"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68\" (UID: \"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68\") " Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.584422 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr65m\" (UniqueName: \"kubernetes.io/projected/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-kube-api-access-fr65m\") pod \"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68\" (UID: \"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68\") " Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.614897 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-kube-api-access-fr65m" (OuterVolumeSpecName: "kube-api-access-fr65m") pod "2fb31b59-3f31-4c28-ab5c-e2248ed9fd68" (UID: "2fb31b59-3f31-4c28-ab5c-e2248ed9fd68"). InnerVolumeSpecName "kube-api-access-fr65m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.621385 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2fb31b59-3f31-4c28-ab5c-e2248ed9fd68" (UID: "2fb31b59-3f31-4c28-ab5c-e2248ed9fd68"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.685809 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-config-data\") pod \"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68\" (UID: \"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68\") " Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.686084 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.686116 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr65m\" (UniqueName: \"kubernetes.io/projected/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-kube-api-access-fr65m\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.712347 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-config-data" (OuterVolumeSpecName: "config-data") pod "2fb31b59-3f31-4c28-ab5c-e2248ed9fd68" (UID: "2fb31b59-3f31-4c28-ab5c-e2248ed9fd68"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.787986 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.921152 5008 generic.go:334] "Generic (PLEG): container finished" podID="038b9a46-5128-497b-8073-557e8f3542fb" containerID="951b0f36fd6a684d8c30fa21487872b1f27e31c08947dd98a725b29af452b297" exitCode=0 Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.921234 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"038b9a46-5128-497b-8073-557e8f3542fb","Type":"ContainerDied","Data":"951b0f36fd6a684d8c30fa21487872b1f27e31c08947dd98a725b29af452b297"} Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.924299 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2fb31b59-3f31-4c28-ab5c-e2248ed9fd68","Type":"ContainerDied","Data":"389259437b307b4cfc4471206316ecc9ba9f12cd3bf0806c91536ddba10b92db"} Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.924459 5008 scope.go:117] "RemoveContainer" containerID="17b2938a300945d89c2e820081e7b2a24c3ca3bec8b7edb3be53cf8c5bdf2768" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.924346 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.960240 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.968577 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.993104 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 15:52:49 crc kubenswrapper[5008]: E0129 15:52:49.993483 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfacde84-7d28-464b-8854-622fd127956c" containerName="nova-manage" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.993501 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfacde84-7d28-464b-8854-622fd127956c" containerName="nova-manage" Jan 29 15:52:49 crc kubenswrapper[5008]: E0129 15:52:49.993510 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ae154d-9b35-408c-bcdb-8b9601be71c8" containerName="init" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.993516 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="65ae154d-9b35-408c-bcdb-8b9601be71c8" containerName="init" Jan 29 15:52:49 crc kubenswrapper[5008]: E0129 15:52:49.993534 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fb31b59-3f31-4c28-ab5c-e2248ed9fd68" containerName="nova-scheduler-scheduler" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.993541 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fb31b59-3f31-4c28-ab5c-e2248ed9fd68" containerName="nova-scheduler-scheduler" Jan 29 15:52:49 crc kubenswrapper[5008]: E0129 15:52:49.993565 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ae154d-9b35-408c-bcdb-8b9601be71c8" containerName="dnsmasq-dns" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.993570 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="65ae154d-9b35-408c-bcdb-8b9601be71c8" containerName="dnsmasq-dns" Jan 29 
15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.993754 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fb31b59-3f31-4c28-ab5c-e2248ed9fd68" containerName="nova-scheduler-scheduler" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.993776 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfacde84-7d28-464b-8854-622fd127956c" containerName="nova-manage" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.993789 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="65ae154d-9b35-408c-bcdb-8b9601be71c8" containerName="dnsmasq-dns" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.994358 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 15:52:49 crc kubenswrapper[5008]: I0129 15:52:49.995772 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.002983 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.093889 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6caa062-78b8-42ad-a655-6828f63a7e8f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f6caa062-78b8-42ad-a655-6828f63a7e8f\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.094261 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6caa062-78b8-42ad-a655-6828f63a7e8f-config-data\") pod \"nova-scheduler-0\" (UID: \"f6caa062-78b8-42ad-a655-6828f63a7e8f\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.094294 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-858gf\" (UniqueName: \"kubernetes.io/projected/f6caa062-78b8-42ad-a655-6828f63a7e8f-kube-api-access-858gf\") pod \"nova-scheduler-0\" (UID: \"f6caa062-78b8-42ad-a655-6828f63a7e8f\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.195762 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6caa062-78b8-42ad-a655-6828f63a7e8f-config-data\") pod \"nova-scheduler-0\" (UID: \"f6caa062-78b8-42ad-a655-6828f63a7e8f\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.195846 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-858gf\" (UniqueName: \"kubernetes.io/projected/f6caa062-78b8-42ad-a655-6828f63a7e8f-kube-api-access-858gf\") pod \"nova-scheduler-0\" (UID: \"f6caa062-78b8-42ad-a655-6828f63a7e8f\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.195909 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6caa062-78b8-42ad-a655-6828f63a7e8f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f6caa062-78b8-42ad-a655-6828f63a7e8f\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.200823 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f6caa062-78b8-42ad-a655-6828f63a7e8f-config-data\") pod \"nova-scheduler-0\" (UID: \"f6caa062-78b8-42ad-a655-6828f63a7e8f\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.201015 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6caa062-78b8-42ad-a655-6828f63a7e8f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f6caa062-78b8-42ad-a655-6828f63a7e8f\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.212132 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-858gf\" (UniqueName: \"kubernetes.io/projected/f6caa062-78b8-42ad-a655-6828f63a7e8f-kube-api-access-858gf\") pod \"nova-scheduler-0\" (UID: \"f6caa062-78b8-42ad-a655-6828f63a7e8f\") " pod="openstack/nova-scheduler-0" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.255645 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.297043 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5fvq\" (UniqueName: \"kubernetes.io/projected/038b9a46-5128-497b-8073-557e8f3542fb-kube-api-access-l5fvq\") pod \"038b9a46-5128-497b-8073-557e8f3542fb\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.297113 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-combined-ca-bundle\") pod \"038b9a46-5128-497b-8073-557e8f3542fb\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.297152 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/038b9a46-5128-497b-8073-557e8f3542fb-logs\") pod \"038b9a46-5128-497b-8073-557e8f3542fb\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.297184 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-config-data\") pod \"038b9a46-5128-497b-8073-557e8f3542fb\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.297233 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-nova-metadata-tls-certs\") pod \"038b9a46-5128-497b-8073-557e8f3542fb\" (UID: \"038b9a46-5128-497b-8073-557e8f3542fb\") " Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.297794 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/038b9a46-5128-497b-8073-557e8f3542fb-logs" (OuterVolumeSpecName: "logs") pod "038b9a46-5128-497b-8073-557e8f3542fb" (UID: "038b9a46-5128-497b-8073-557e8f3542fb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.300023 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/038b9a46-5128-497b-8073-557e8f3542fb-kube-api-access-l5fvq" (OuterVolumeSpecName: "kube-api-access-l5fvq") pod "038b9a46-5128-497b-8073-557e8f3542fb" (UID: "038b9a46-5128-497b-8073-557e8f3542fb"). InnerVolumeSpecName "kube-api-access-l5fvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.319486 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.326764 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-config-data" (OuterVolumeSpecName: "config-data") pod "038b9a46-5128-497b-8073-557e8f3542fb" (UID: "038b9a46-5128-497b-8073-557e8f3542fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.337501 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "038b9a46-5128-497b-8073-557e8f3542fb" (UID: "038b9a46-5128-497b-8073-557e8f3542fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.398890 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5fvq\" (UniqueName: \"kubernetes.io/projected/038b9a46-5128-497b-8073-557e8f3542fb-kube-api-access-l5fvq\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.398925 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.398935 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/038b9a46-5128-497b-8073-557e8f3542fb-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.398944 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.412827 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "038b9a46-5128-497b-8073-557e8f3542fb" (UID: "038b9a46-5128-497b-8073-557e8f3542fb"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:50 crc kubenswrapper[5008]: E0129 15:52:50.493960 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 29 15:52:50 crc kubenswrapper[5008]: E0129 15:52:50.494160 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zk8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d40740f9-e8d8-4f46-b8b0-d913a6c33210): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:52:50 crc kubenswrapper[5008]: E0129 15:52:50.495654 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source 
docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.501214 5008 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/038b9a46-5128-497b-8073-557e8f3542fb-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:50 crc kubenswrapper[5008]: W0129 15:52:50.829488 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6caa062_78b8_42ad_a655_6828f63a7e8f.slice/crio-8020386dfa58fb41e827835a90030dcb286ee2dc46d4f86024db8938474553ee WatchSource:0}: Error finding container 8020386dfa58fb41e827835a90030dcb286ee2dc46d4f86024db8938474553ee: Status 404 returned error can't find the container with id 8020386dfa58fb41e827835a90030dcb286ee2dc46d4f86024db8938474553ee Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.831627 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.943712 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f6caa062-78b8-42ad-a655-6828f63a7e8f","Type":"ContainerStarted","Data":"8020386dfa58fb41e827835a90030dcb286ee2dc46d4f86024db8938474553ee"} Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.948314 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"038b9a46-5128-497b-8073-557e8f3542fb","Type":"ContainerDied","Data":"f54ae340e3e9e95461e8dd7339317d96f2c608cdca914d4ca65b81b43814916d"} Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.948376 5008 scope.go:117] "RemoveContainer" containerID="951b0f36fd6a684d8c30fa21487872b1f27e31c08947dd98a725b29af452b297" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.948716 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.961023 5008 generic.go:334] "Generic (PLEG): container finished" podID="f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" containerID="f79f38ff0afa3885296e624a49ae42810a26d27a384ceccb3214269c19350348" exitCode=0 Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.961078 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6","Type":"ContainerDied","Data":"f79f38ff0afa3885296e624a49ae42810a26d27a384ceccb3214269c19350348"} Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.961112 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6","Type":"ContainerDied","Data":"f95804822e24c4b9f3caf2c4f8e60772c884987c449b1013ddd08314002b1592"} Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.961128 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f95804822e24c4b9f3caf2c4f8e60772c884987c449b1013ddd08314002b1592" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.970654 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 15:52:50 crc kubenswrapper[5008]: I0129 15:52:50.982109 5008 scope.go:117] "RemoveContainer" containerID="b1cb4fe0e965ed395741ca05d4744c778b350ee5b58ae99ed0af4f4789b2408e" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.018143 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.027727 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.052326 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:52:51 crc kubenswrapper[5008]: E0129 15:52:51.052803 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="038b9a46-5128-497b-8073-557e8f3542fb" containerName="nova-metadata-metadata" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.052841 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="038b9a46-5128-497b-8073-557e8f3542fb" containerName="nova-metadata-metadata" Jan 29 15:52:51 crc kubenswrapper[5008]: E0129 15:52:51.052866 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" containerName="nova-api-log" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.052875 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" containerName="nova-api-log" Jan 29 15:52:51 crc kubenswrapper[5008]: E0129 15:52:51.052890 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" containerName="nova-api-api" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.052898 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" containerName="nova-api-api" Jan 29 15:52:51 crc kubenswrapper[5008]: E0129 15:52:51.052919 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="038b9a46-5128-497b-8073-557e8f3542fb" containerName="nova-metadata-log" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.052928 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="038b9a46-5128-497b-8073-557e8f3542fb" containerName="nova-metadata-log" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.053142 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" containerName="nova-api-log" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.053165 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="038b9a46-5128-497b-8073-557e8f3542fb" containerName="nova-metadata-log" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.053183 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="038b9a46-5128-497b-8073-557e8f3542fb" containerName="nova-metadata-metadata" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.053198 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" containerName="nova-api-api" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.054363 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.057016 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.058436 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.060958 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.112818 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-logs\") pod \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.112906 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-config-data\") pod \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.112946 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ns2nh\" (UniqueName: \"kubernetes.io/projected/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-kube-api-access-ns2nh\") pod \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.112989 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-public-tls-certs\") pod \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.113032 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-internal-tls-certs\") pod \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.113123 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-combined-ca-bundle\") pod \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\" (UID: \"f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6\") " Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.113384 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-logs" (OuterVolumeSpecName: "logs") pod "f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" (UID: "f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.113691 5008 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.116751 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-kube-api-access-ns2nh" (OuterVolumeSpecName: "kube-api-access-ns2nh") pod "f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" (UID: "f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6"). InnerVolumeSpecName "kube-api-access-ns2nh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.139531 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-config-data" (OuterVolumeSpecName: "config-data") pod "f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" (UID: "f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.145607 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" (UID: "f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.159128 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" (UID: "f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.171051 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" (UID: "f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.215210 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4470533-b658-46fe-8749-f371b22703b2-config-data\") pod \"nova-metadata-0\" (UID: \"a4470533-b658-46fe-8749-f371b22703b2\") " pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.215400 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4470533-b658-46fe-8749-f371b22703b2-logs\") pod \"nova-metadata-0\" (UID: \"a4470533-b658-46fe-8749-f371b22703b2\") " pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.215424 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccwzd\" (UniqueName: \"kubernetes.io/projected/a4470533-b658-46fe-8749-f371b22703b2-kube-api-access-ccwzd\") pod \"nova-metadata-0\" (UID: \"a4470533-b658-46fe-8749-f371b22703b2\") " pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.215447 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4470533-b658-46fe-8749-f371b22703b2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a4470533-b658-46fe-8749-f371b22703b2\") " pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.215467 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4470533-b658-46fe-8749-f371b22703b2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a4470533-b658-46fe-8749-f371b22703b2\") " pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.215508 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.215520 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ns2nh\" (UniqueName: \"kubernetes.io/projected/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-kube-api-access-ns2nh\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.215530 5008 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.215541 5008 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.215549 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.317428 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a4470533-b658-46fe-8749-f371b22703b2-logs\") pod \"nova-metadata-0\" (UID: \"a4470533-b658-46fe-8749-f371b22703b2\") " pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.317483 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccwzd\" (UniqueName: \"kubernetes.io/projected/a4470533-b658-46fe-8749-f371b22703b2-kube-api-access-ccwzd\") pod \"nova-metadata-0\" (UID: \"a4470533-b658-46fe-8749-f371b22703b2\") " pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.317509 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4470533-b658-46fe-8749-f371b22703b2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a4470533-b658-46fe-8749-f371b22703b2\") " pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.317528 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4470533-b658-46fe-8749-f371b22703b2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a4470533-b658-46fe-8749-f371b22703b2\") " pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.317555 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4470533-b658-46fe-8749-f371b22703b2-config-data\") pod \"nova-metadata-0\" (UID: \"a4470533-b658-46fe-8749-f371b22703b2\") " pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.318369 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4470533-b658-46fe-8749-f371b22703b2-logs\") pod \"nova-metadata-0\" (UID: \"a4470533-b658-46fe-8749-f371b22703b2\") " pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.321630 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4470533-b658-46fe-8749-f371b22703b2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a4470533-b658-46fe-8749-f371b22703b2\") " pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.322414 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4470533-b658-46fe-8749-f371b22703b2-config-data\") pod \"nova-metadata-0\" (UID: \"a4470533-b658-46fe-8749-f371b22703b2\") " pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.322436 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4470533-b658-46fe-8749-f371b22703b2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a4470533-b658-46fe-8749-f371b22703b2\") " pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.335161 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="038b9a46-5128-497b-8073-557e8f3542fb" path="/var/lib/kubelet/pods/038b9a46-5128-497b-8073-557e8f3542fb/volumes" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.335725 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fb31b59-3f31-4c28-ab5c-e2248ed9fd68" 
path="/var/lib/kubelet/pods/2fb31b59-3f31-4c28-ab5c-e2248ed9fd68/volumes" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.349679 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccwzd\" (UniqueName: \"kubernetes.io/projected/a4470533-b658-46fe-8749-f371b22703b2-kube-api-access-ccwzd\") pod \"nova-metadata-0\" (UID: \"a4470533-b658-46fe-8749-f371b22703b2\") " pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.377352 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.863139 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 15:52:51 crc kubenswrapper[5008]: W0129 15:52:51.873081 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4470533_b658_46fe_8749_f371b22703b2.slice/crio-db2b948aaf75b189f6aa9901094640e00f4da0eb1988661aafbcf7eb4dd51063 WatchSource:0}: Error finding container db2b948aaf75b189f6aa9901094640e00f4da0eb1988661aafbcf7eb4dd51063: Status 404 returned error can't find the container with id db2b948aaf75b189f6aa9901094640e00f4da0eb1988661aafbcf7eb4dd51063 Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.972124 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f6caa062-78b8-42ad-a655-6828f63a7e8f","Type":"ContainerStarted","Data":"12209b0fe0deeb6852eef600b29008eb94a8ee68d3ebcc1d302584865f889359"} Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.977676 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a4470533-b658-46fe-8749-f371b22703b2","Type":"ContainerStarted","Data":"db2b948aaf75b189f6aa9901094640e00f4da0eb1988661aafbcf7eb4dd51063"} Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.979177 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 15:52:51 crc kubenswrapper[5008]: I0129 15:52:51.996560 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.996534278 podStartE2EDuration="2.996534278s" podCreationTimestamp="2026-01-29 15:52:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:52:51.993001553 +0000 UTC m=+1515.665855800" watchObservedRunningTime="2026-01-29 15:52:51.996534278 +0000 UTC m=+1515.669388515" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.021843 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.031966 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.046057 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.047862 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.050717 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.052078 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.053286 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.056191 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.234403 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.234721 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-config-data\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.234828 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.234864 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxfnr\" (UniqueName: \"kubernetes.io/projected/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-kube-api-access-zxfnr\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.234894 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-public-tls-certs\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.234922 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-logs\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.336609 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.336675 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxfnr\" (UniqueName: \"kubernetes.io/projected/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-kube-api-access-zxfnr\") 
pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.336707 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-public-tls-certs\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.336739 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-logs\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.336784 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.336803 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-config-data\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.337748 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-logs\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.341675 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.343330 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-public-tls-certs\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.343614 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-config-data\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.344120 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.354533 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxfnr\" (UniqueName: \"kubernetes.io/projected/ffff5fc1-f4be-4fad-bfa8-890ea58d2a00-kube-api-access-zxfnr\") pod \"nova-api-0\" (UID: \"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00\") " pod="openstack/nova-api-0" Jan 
29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.374185 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.825469 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 15:52:52 crc kubenswrapper[5008]: W0129 15:52:52.841857 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffff5fc1_f4be_4fad_bfa8_890ea58d2a00.slice/crio-6505b865f270ca95990e7849ff3eb462da8e847d4c5dea8c3d31acc4aa357430 WatchSource:0}: Error finding container 6505b865f270ca95990e7849ff3eb462da8e847d4c5dea8c3d31acc4aa357430: Status 404 returned error can't find the container with id 6505b865f270ca95990e7849ff3eb462da8e847d4c5dea8c3d31acc4aa357430 Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.987875 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00","Type":"ContainerStarted","Data":"6505b865f270ca95990e7849ff3eb462da8e847d4c5dea8c3d31acc4aa357430"} Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.994290 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a4470533-b658-46fe-8749-f371b22703b2","Type":"ContainerStarted","Data":"e3f2b0eb5709a441e3aaf944c5a5bd7f9e69fcf4f51df6104efd6bfbf194d4e5"} Jan 29 15:52:52 crc kubenswrapper[5008]: I0129 15:52:52.994342 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a4470533-b658-46fe-8749-f371b22703b2","Type":"ContainerStarted","Data":"c202e9718bd598f0b9777b933a454710906e6dc6c784b287c488720233bc854a"} Jan 29 15:52:53 crc kubenswrapper[5008]: I0129 15:52:53.024880 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.024844895 podStartE2EDuration="3.024844895s" podCreationTimestamp="2026-01-29 15:52:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:52:53.013509554 +0000 UTC m=+1516.686363831" watchObservedRunningTime="2026-01-29 15:52:53.024844895 +0000 UTC m=+1516.697699132" Jan 29 15:52:53 crc kubenswrapper[5008]: I0129 15:52:53.336349 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6" path="/var/lib/kubelet/pods/f3e5f6eb-04c4-4797-9a4a-e4a2a710bcb6/volumes" Jan 29 15:52:54 crc kubenswrapper[5008]: I0129 15:52:54.012313 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00","Type":"ContainerStarted","Data":"328db75da770e9e18ad52014a71d74596e17fa2e4fa8662790336cfc18e63783"} Jan 29 15:52:54 crc kubenswrapper[5008]: I0129 15:52:54.012365 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ffff5fc1-f4be-4fad-bfa8-890ea58d2a00","Type":"ContainerStarted","Data":"af9398ca33cdf40db434083cb15e2dbcc32be3c6c714d1798ebce279aef34ce5"} Jan 29 15:52:54 crc kubenswrapper[5008]: I0129 15:52:54.048762 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.048744007 podStartE2EDuration="2.048744007s" podCreationTimestamp="2026-01-29 15:52:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-29 15:52:54.042181641 +0000 UTC m=+1517.715035898" watchObservedRunningTime="2026-01-29 15:52:54.048744007 +0000 UTC m=+1517.721598244" Jan 29 15:52:55 crc kubenswrapper[5008]: I0129 15:52:55.321052 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 15:52:56 crc kubenswrapper[5008]: I0129 15:52:56.378295 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 15:52:56 crc kubenswrapper[5008]: I0129 15:52:56.378765 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 15:53:00 crc kubenswrapper[5008]: I0129 15:53:00.321207 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 15:53:00 crc kubenswrapper[5008]: I0129 15:53:00.349152 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 15:53:01 crc kubenswrapper[5008]: I0129 15:53:01.130492 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 15:53:01 crc kubenswrapper[5008]: I0129 15:53:01.378383 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 15:53:01 crc kubenswrapper[5008]: I0129 15:53:01.378468 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 15:53:02 crc kubenswrapper[5008]: E0129 15:53:02.328243 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:53:02 crc kubenswrapper[5008]: I0129 15:53:02.375041 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 15:53:02 crc kubenswrapper[5008]: I0129 15:53:02.375089 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 15:53:02 crc kubenswrapper[5008]: I0129 15:53:02.396956 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a4470533-b658-46fe-8749-f371b22703b2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 15:53:02 crc kubenswrapper[5008]: I0129 15:53:02.396991 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a4470533-b658-46fe-8749-f371b22703b2" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 15:53:03 crc kubenswrapper[5008]: I0129 15:53:03.457998 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ffff5fc1-f4be-4fad-bfa8-890ea58d2a00" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 15:53:03 crc kubenswrapper[5008]: I0129 15:53:03.458055 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ffff5fc1-f4be-4fad-bfa8-890ea58d2a00" containerName="nova-api-api" probeResult="failure" 
output="Get \"https://10.217.0.207:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 15:53:11 crc kubenswrapper[5008]: I0129 15:53:11.385710 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 15:53:11 crc kubenswrapper[5008]: I0129 15:53:11.386615 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 15:53:11 crc kubenswrapper[5008]: I0129 15:53:11.395101 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 15:53:11 crc kubenswrapper[5008]: I0129 15:53:11.397546 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 15:53:12 crc kubenswrapper[5008]: I0129 15:53:12.384558 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 15:53:12 crc kubenswrapper[5008]: I0129 15:53:12.385118 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 15:53:12 crc kubenswrapper[5008]: I0129 15:53:12.385567 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 15:53:12 crc kubenswrapper[5008]: I0129 15:53:12.385640 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 15:53:12 crc kubenswrapper[5008]: I0129 15:53:12.397881 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 15:53:12 crc kubenswrapper[5008]: I0129 15:53:12.399924 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 15:53:15 crc kubenswrapper[5008]: E0129 15:53:15.460097 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 29 15:53:15 crc kubenswrapper[5008]: E0129 15:53:15.460941 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zk8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d40740f9-e8d8-4f46-b8b0-d913a6c33210): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:53:15 crc kubenswrapper[5008]: E0129 15:53:15.462154 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:53:24 crc kubenswrapper[5008]: I0129 15:53:24.169494 5008 scope.go:117] "RemoveContainer" containerID="1545206f415995f8be0b1d78b3af14329c9b33899a9464b3994d4df802ea1766" Jan 29 15:53:24 crc kubenswrapper[5008]: I0129 15:53:24.210816 5008 scope.go:117] "RemoveContainer" 
containerID="e93e17f1bada8f9ceb5d734c0b57f087df79c0ad461fa0d4048a7875532ded1d" Jan 29 15:53:29 crc kubenswrapper[5008]: E0129 15:53:29.327700 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:53:43 crc kubenswrapper[5008]: E0129 15:53:43.330546 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:53:56 crc kubenswrapper[5008]: E0129 15:53:56.580611 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 29 15:53:56 crc kubenswrapper[5008]: E0129 15:53:56.581167 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zk8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d40740f9-e8d8-4f46-b8b0-d913a6c33210): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:53:56 crc kubenswrapper[5008]: E0129 15:53:56.582392 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:54:08 crc kubenswrapper[5008]: E0129 15:54:08.327402 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:54:23 crc kubenswrapper[5008]: E0129 15:54:23.326711 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:54:24 crc kubenswrapper[5008]: I0129 15:54:24.432620 5008 scope.go:117] "RemoveContainer" containerID="82015428914e1b8d83489174480b3a04643dbd25b377d65c00407eb4dfbc5a91" Jan 29 15:54:38 crc kubenswrapper[5008]: E0129 15:54:38.326990 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:54:49 crc kubenswrapper[5008]: E0129 15:54:49.325532 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:55:01 crc kubenswrapper[5008]: E0129 15:55:01.328123 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:55:13 crc kubenswrapper[5008]: I0129 15:55:13.991339 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:55:13 crc kubenswrapper[5008]: I0129 15:55:13.995040 5008 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:55:15 crc kubenswrapper[5008]: E0129 15:55:15.327727 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:55:30 crc kubenswrapper[5008]: I0129 15:55:30.326887 5008 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 15:55:30 crc kubenswrapper[5008]: E0129 15:55:30.461769 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 29 15:55:30 crc kubenswrapper[5008]: E0129 15:55:30.462003 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zk8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d40740f9-e8d8-4f46-b8b0-d913a6c33210): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 15:55:30 crc kubenswrapper[5008]: E0129 15:55:30.463220 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 15:55:42 crc kubenswrapper[5008]: E0129 15:55:42.327394 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
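Every ErrImagePull and ImagePullBackOff entry above has the same root cause: while requesting a bearer token for registry.redhat.io, the kubelet gets 403 (Forbidden) back, so the pull of ubi9/httpd-24:latest fails before a single layer is fetched and pod ceilometer-0 is re-queued with a growing back-off. A 403 at the token step points to rejected credentials or a missing entitlement in the pull secret rather than a network fault. The sketch below separates those two cases by hand; it only performs the anonymous registry ping and prints the authentication challenge, and assumes nothing beyond the standard distribution-registry token flow (an illustration, not the kubelet's own code path):

    package main

    import (
        "fmt"
        "net/http"
        "os"
    )

    // Reproduce the kubelet's "Requesting bearer token" preamble by hand to
    // tell "registry unreachable" apart from "credentials rejected (403)".
    // An anonymous GET /v2/ normally answers 401 and names the token realm
    // in the WWW-Authenticate header.
    func main() {
        resp, err := http.Get("https://registry.redhat.io/v2/")
        if err != nil {
            fmt.Fprintln(os.Stderr, "registry unreachable:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("ping status:", resp.Status) // expect 401 Unauthorized
        fmt.Println("token challenge:", resp.Header.Get("Www-Authenticate"))
        // Requesting a token from that realm with credentials attached and
        // getting 403 back is exactly the failure logged above: the account
        // is being rejected while the network and the registry are fine.
    }

If the ping itself errors out, the registry is unreachable; if it answers with a challenge but a token request carrying your credentials still returns 403, the credentials are the problem, which matches the entries above.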
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:56:13 crc kubenswrapper[5008]: I0129 15:56:13.991333 5008 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 15:56:13 crc kubenswrapper[5008]: I0129 15:56:13.992347 5008 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19"} pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:56:13 crc kubenswrapper[5008]: I0129 15:56:13.992448 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" containerID="cri-o://1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" gracePeriod=600 Jan 29 15:56:14 crc kubenswrapper[5008]: I0129 15:56:14.243911 5008 generic.go:334] "Generic (PLEG): container finished" podID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" exitCode=0 Jan 29 15:56:14 crc kubenswrapper[5008]: I0129 15:56:14.243955 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerDied","Data":"1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19"} Jan 29 15:56:14 crc kubenswrapper[5008]: I0129 15:56:14.243988 5008 scope.go:117] "RemoveContainer" containerID="65ae63639c2ed32e45710e52e6b068b2f105163d6a00247deb197db6c3e0b41c" Jan 29 15:56:14 crc kubenswrapper[5008]: E0129 15:56:14.291850 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:56:15 crc kubenswrapper[5008]: I0129 15:56:15.256960 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:56:15 crc kubenswrapper[5008]: E0129 15:56:15.257595 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:56:23 crc kubenswrapper[5008]: E0129 15:56:23.326971 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:56:30 crc kubenswrapper[5008]: I0129 15:56:30.324177 5008 scope.go:117] "RemoveContainer" 
containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:56:30 crc kubenswrapper[5008]: E0129 15:56:30.325139 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:56:36 crc kubenswrapper[5008]: E0129 15:56:36.327723 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:56:41 crc kubenswrapper[5008]: I0129 15:56:41.914359 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lc24f"] Jan 29 15:56:41 crc kubenswrapper[5008]: I0129 15:56:41.916768 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:41 crc kubenswrapper[5008]: I0129 15:56:41.937830 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lc24f"] Jan 29 15:56:41 crc kubenswrapper[5008]: I0129 15:56:41.961116 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91204902-80fb-472a-b67c-1d290bd97368-utilities\") pod \"community-operators-lc24f\" (UID: \"91204902-80fb-472a-b67c-1d290bd97368\") " pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:41 crc kubenswrapper[5008]: I0129 15:56:41.961375 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnd9j\" (UniqueName: \"kubernetes.io/projected/91204902-80fb-472a-b67c-1d290bd97368-kube-api-access-vnd9j\") pod \"community-operators-lc24f\" (UID: \"91204902-80fb-472a-b67c-1d290bd97368\") " pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:41 crc kubenswrapper[5008]: I0129 15:56:41.961440 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91204902-80fb-472a-b67c-1d290bd97368-catalog-content\") pod \"community-operators-lc24f\" (UID: \"91204902-80fb-472a-b67c-1d290bd97368\") " pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:42 crc kubenswrapper[5008]: I0129 15:56:42.063443 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91204902-80fb-472a-b67c-1d290bd97368-catalog-content\") pod \"community-operators-lc24f\" (UID: \"91204902-80fb-472a-b67c-1d290bd97368\") " pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:42 crc kubenswrapper[5008]: I0129 15:56:42.063555 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91204902-80fb-472a-b67c-1d290bd97368-utilities\") pod \"community-operators-lc24f\" (UID: \"91204902-80fb-472a-b67c-1d290bd97368\") " pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:42 crc kubenswrapper[5008]: I0129 15:56:42.063684 5008 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnd9j\" (UniqueName: \"kubernetes.io/projected/91204902-80fb-472a-b67c-1d290bd97368-kube-api-access-vnd9j\") pod \"community-operators-lc24f\" (UID: \"91204902-80fb-472a-b67c-1d290bd97368\") " pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:42 crc kubenswrapper[5008]: I0129 15:56:42.064053 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91204902-80fb-472a-b67c-1d290bd97368-catalog-content\") pod \"community-operators-lc24f\" (UID: \"91204902-80fb-472a-b67c-1d290bd97368\") " pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:42 crc kubenswrapper[5008]: I0129 15:56:42.064070 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91204902-80fb-472a-b67c-1d290bd97368-utilities\") pod \"community-operators-lc24f\" (UID: \"91204902-80fb-472a-b67c-1d290bd97368\") " pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:42 crc kubenswrapper[5008]: I0129 15:56:42.090579 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnd9j\" (UniqueName: \"kubernetes.io/projected/91204902-80fb-472a-b67c-1d290bd97368-kube-api-access-vnd9j\") pod \"community-operators-lc24f\" (UID: \"91204902-80fb-472a-b67c-1d290bd97368\") " pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:42 crc kubenswrapper[5008]: I0129 15:56:42.251725 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:42 crc kubenswrapper[5008]: I0129 15:56:42.323616 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:56:42 crc kubenswrapper[5008]: E0129 15:56:42.324146 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:56:42 crc kubenswrapper[5008]: I0129 15:56:42.811820 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lc24f"] Jan 29 15:56:43 crc kubenswrapper[5008]: I0129 15:56:43.507623 5008 generic.go:334] "Generic (PLEG): container finished" podID="91204902-80fb-472a-b67c-1d290bd97368" containerID="7d4815761a9d2f556ee06bbf98cf1b6c8cec425b4632da102c9fe10b76949770" exitCode=0 Jan 29 15:56:43 crc kubenswrapper[5008]: I0129 15:56:43.508172 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lc24f" event={"ID":"91204902-80fb-472a-b67c-1d290bd97368","Type":"ContainerDied","Data":"7d4815761a9d2f556ee06bbf98cf1b6c8cec425b4632da102c9fe10b76949770"} Jan 29 15:56:43 crc kubenswrapper[5008]: I0129 15:56:43.508202 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lc24f" event={"ID":"91204902-80fb-472a-b67c-1d290bd97368","Type":"ContainerStarted","Data":"e4c065692be9ea478648ae1adb8036fa6a548911ddca69f2ffc651d85a0ff9b8"} Jan 29 15:56:46 crc kubenswrapper[5008]: I0129 15:56:46.537115 5008 generic.go:334] "Generic (PLEG): 
container finished" podID="91204902-80fb-472a-b67c-1d290bd97368" containerID="4c7f1c035bf93e990a09127ab0239b9dd8fb171aad0406e2e4f471771073ce20" exitCode=0 Jan 29 15:56:46 crc kubenswrapper[5008]: I0129 15:56:46.537145 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lc24f" event={"ID":"91204902-80fb-472a-b67c-1d290bd97368","Type":"ContainerDied","Data":"4c7f1c035bf93e990a09127ab0239b9dd8fb171aad0406e2e4f471771073ce20"} Jan 29 15:56:47 crc kubenswrapper[5008]: E0129 15:56:47.335481 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:56:48 crc kubenswrapper[5008]: I0129 15:56:48.557247 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lc24f" event={"ID":"91204902-80fb-472a-b67c-1d290bd97368","Type":"ContainerStarted","Data":"fd5b906760d69a40cedcc9755fc25288bec9129c3fde13b9ce243cf6e009d4c4"} Jan 29 15:56:51 crc kubenswrapper[5008]: I0129 15:56:51.034256 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lc24f" podStartSLOduration=6.081057289 podStartE2EDuration="10.034231822s" podCreationTimestamp="2026-01-29 15:56:41 +0000 UTC" firstStartedPulling="2026-01-29 15:56:43.510142204 +0000 UTC m=+1747.182996441" lastFinishedPulling="2026-01-29 15:56:47.463316737 +0000 UTC m=+1751.136170974" observedRunningTime="2026-01-29 15:56:48.577026083 +0000 UTC m=+1752.249880310" watchObservedRunningTime="2026-01-29 15:56:51.034231822 +0000 UTC m=+1754.707086089" Jan 29 15:56:51 crc kubenswrapper[5008]: I0129 15:56:51.051042 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-pggzk"] Jan 29 15:56:51 crc kubenswrapper[5008]: I0129 15:56:51.065615 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e4e6-account-create-update-6vxmr"] Jan 29 15:56:51 crc kubenswrapper[5008]: I0129 15:56:51.073674 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-0e02-account-create-update-7n7jw"] Jan 29 15:56:51 crc kubenswrapper[5008]: I0129 15:56:51.080974 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-8tpqs"] Jan 29 15:56:51 crc kubenswrapper[5008]: I0129 15:56:51.089841 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-4a04-account-create-update-2cfml"] Jan 29 15:56:51 crc kubenswrapper[5008]: I0129 15:56:51.097302 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-pggzk"] Jan 29 15:56:51 crc kubenswrapper[5008]: I0129 15:56:51.105602 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-e4e6-account-create-update-6vxmr"] Jan 29 15:56:51 crc kubenswrapper[5008]: I0129 15:56:51.112741 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-0e02-account-create-update-7n7jw"] Jan 29 15:56:51 crc kubenswrapper[5008]: I0129 15:56:51.119178 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-8tpqs"] Jan 29 15:56:51 crc kubenswrapper[5008]: I0129 15:56:51.125610 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-4a04-account-create-update-2cfml"] Jan 29 15:56:51 crc kubenswrapper[5008]: I0129 15:56:51.334916 5008 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08da0630-8fe2-4a33-be0c-d81bba67c32c" path="/var/lib/kubelet/pods/08da0630-8fe2-4a33-be0c-d81bba67c32c/volumes" Jan 29 15:56:51 crc kubenswrapper[5008]: I0129 15:56:51.335566 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="232739d0-09f9-4843-8c9f-fc19bc53763f" path="/var/lib/kubelet/pods/232739d0-09f9-4843-8c9f-fc19bc53763f/volumes" Jan 29 15:56:51 crc kubenswrapper[5008]: I0129 15:56:51.336096 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30bc21a6-d1eb-4200-add0-523a33ffb2ff" path="/var/lib/kubelet/pods/30bc21a6-d1eb-4200-add0-523a33ffb2ff/volumes" Jan 29 15:56:51 crc kubenswrapper[5008]: I0129 15:56:51.336637 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="328d3758-78bd-4a08-b91f-f2f4c9b8b645" path="/var/lib/kubelet/pods/328d3758-78bd-4a08-b91f-f2f4c9b8b645/volumes" Jan 29 15:56:51 crc kubenswrapper[5008]: I0129 15:56:51.337976 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fd141cd-e623-4692-892c-cf683275d378" path="/var/lib/kubelet/pods/6fd141cd-e623-4692-892c-cf683275d378/volumes" Jan 29 15:56:52 crc kubenswrapper[5008]: I0129 15:56:52.037541 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-rvpz6"] Jan 29 15:56:52 crc kubenswrapper[5008]: I0129 15:56:52.048053 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-rvpz6"] Jan 29 15:56:52 crc kubenswrapper[5008]: I0129 15:56:52.252242 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:52 crc kubenswrapper[5008]: I0129 15:56:52.252302 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:52 crc kubenswrapper[5008]: I0129 15:56:52.296629 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:52 crc kubenswrapper[5008]: I0129 15:56:52.646146 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:52 crc kubenswrapper[5008]: I0129 15:56:52.706773 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lc24f"] Jan 29 15:56:53 crc kubenswrapper[5008]: I0129 15:56:53.333186 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="207579aa-feff-4069-8fcb-02c5b9cd107f" path="/var/lib/kubelet/pods/207579aa-feff-4069-8fcb-02c5b9cd107f/volumes" Jan 29 15:56:54 crc kubenswrapper[5008]: I0129 15:56:54.324445 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:56:54 crc kubenswrapper[5008]: E0129 15:56:54.324766 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:56:54 crc kubenswrapper[5008]: I0129 15:56:54.610528 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lc24f" 
podUID="91204902-80fb-472a-b67c-1d290bd97368" containerName="registry-server" containerID="cri-o://fd5b906760d69a40cedcc9755fc25288bec9129c3fde13b9ce243cf6e009d4c4" gracePeriod=2 Jan 29 15:56:55 crc kubenswrapper[5008]: I0129 15:56:55.622909 5008 generic.go:334] "Generic (PLEG): container finished" podID="91204902-80fb-472a-b67c-1d290bd97368" containerID="fd5b906760d69a40cedcc9755fc25288bec9129c3fde13b9ce243cf6e009d4c4" exitCode=0 Jan 29 15:56:55 crc kubenswrapper[5008]: I0129 15:56:55.623019 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lc24f" event={"ID":"91204902-80fb-472a-b67c-1d290bd97368","Type":"ContainerDied","Data":"fd5b906760d69a40cedcc9755fc25288bec9129c3fde13b9ce243cf6e009d4c4"} Jan 29 15:56:55 crc kubenswrapper[5008]: I0129 15:56:55.624418 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lc24f" event={"ID":"91204902-80fb-472a-b67c-1d290bd97368","Type":"ContainerDied","Data":"e4c065692be9ea478648ae1adb8036fa6a548911ddca69f2ffc651d85a0ff9b8"} Jan 29 15:56:55 crc kubenswrapper[5008]: I0129 15:56:55.624506 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4c065692be9ea478648ae1adb8036fa6a548911ddca69f2ffc651d85a0ff9b8" Jan 29 15:56:55 crc kubenswrapper[5008]: I0129 15:56:55.655227 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:55 crc kubenswrapper[5008]: I0129 15:56:55.671350 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnd9j\" (UniqueName: \"kubernetes.io/projected/91204902-80fb-472a-b67c-1d290bd97368-kube-api-access-vnd9j\") pod \"91204902-80fb-472a-b67c-1d290bd97368\" (UID: \"91204902-80fb-472a-b67c-1d290bd97368\") " Jan 29 15:56:55 crc kubenswrapper[5008]: I0129 15:56:55.671480 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91204902-80fb-472a-b67c-1d290bd97368-utilities\") pod \"91204902-80fb-472a-b67c-1d290bd97368\" (UID: \"91204902-80fb-472a-b67c-1d290bd97368\") " Jan 29 15:56:55 crc kubenswrapper[5008]: I0129 15:56:55.671708 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91204902-80fb-472a-b67c-1d290bd97368-catalog-content\") pod \"91204902-80fb-472a-b67c-1d290bd97368\" (UID: \"91204902-80fb-472a-b67c-1d290bd97368\") " Jan 29 15:56:55 crc kubenswrapper[5008]: I0129 15:56:55.673082 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91204902-80fb-472a-b67c-1d290bd97368-utilities" (OuterVolumeSpecName: "utilities") pod "91204902-80fb-472a-b67c-1d290bd97368" (UID: "91204902-80fb-472a-b67c-1d290bd97368"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:56:55 crc kubenswrapper[5008]: I0129 15:56:55.685009 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91204902-80fb-472a-b67c-1d290bd97368-kube-api-access-vnd9j" (OuterVolumeSpecName: "kube-api-access-vnd9j") pod "91204902-80fb-472a-b67c-1d290bd97368" (UID: "91204902-80fb-472a-b67c-1d290bd97368"). InnerVolumeSpecName "kube-api-access-vnd9j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:56:55 crc kubenswrapper[5008]: I0129 15:56:55.733496 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91204902-80fb-472a-b67c-1d290bd97368-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91204902-80fb-472a-b67c-1d290bd97368" (UID: "91204902-80fb-472a-b67c-1d290bd97368"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:56:55 crc kubenswrapper[5008]: I0129 15:56:55.774536 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnd9j\" (UniqueName: \"kubernetes.io/projected/91204902-80fb-472a-b67c-1d290bd97368-kube-api-access-vnd9j\") on node \"crc\" DevicePath \"\"" Jan 29 15:56:55 crc kubenswrapper[5008]: I0129 15:56:55.774595 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91204902-80fb-472a-b67c-1d290bd97368-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:56:55 crc kubenswrapper[5008]: I0129 15:56:55.774609 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91204902-80fb-472a-b67c-1d290bd97368-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:56:56 crc kubenswrapper[5008]: I0129 15:56:56.632136 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lc24f" Jan 29 15:56:56 crc kubenswrapper[5008]: I0129 15:56:56.666767 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lc24f"] Jan 29 15:56:56 crc kubenswrapper[5008]: I0129 15:56:56.675948 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lc24f"] Jan 29 15:56:57 crc kubenswrapper[5008]: I0129 15:56:57.342354 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91204902-80fb-472a-b67c-1d290bd97368" path="/var/lib/kubelet/pods/91204902-80fb-472a-b67c-1d290bd97368/volumes" Jan 29 15:56:59 crc kubenswrapper[5008]: I0129 15:56:59.068884 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-bxxx2"] Jan 29 15:56:59 crc kubenswrapper[5008]: I0129 15:56:59.091873 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-bxxx2"] Jan 29 15:56:59 crc kubenswrapper[5008]: I0129 15:56:59.335156 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98c93f6a-d803-4df3-8b35-191cbe683adf" path="/var/lib/kubelet/pods/98c93f6a-d803-4df3-8b35-191cbe683adf/volumes" Jan 29 15:57:01 crc kubenswrapper[5008]: E0129 15:57:01.326184 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:57:09 crc kubenswrapper[5008]: I0129 15:57:09.323181 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:57:09 crc kubenswrapper[5008]: E0129 15:57:09.323852 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 29 15:56:59 crc kubenswrapper[5008]: I0129 15:56:59.068884 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-bxxx2"]
Jan 29 15:56:59 crc kubenswrapper[5008]: I0129 15:56:59.091873 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-bxxx2"]
Jan 29 15:56:59 crc kubenswrapper[5008]: I0129 15:56:59.335156 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98c93f6a-d803-4df3-8b35-191cbe683adf" path="/var/lib/kubelet/pods/98c93f6a-d803-4df3-8b35-191cbe683adf/volumes"
Jan 29 15:57:01 crc kubenswrapper[5008]: E0129 15:57:01.326184 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 15:57:09 crc kubenswrapper[5008]: I0129 15:57:09.323181 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19"
Jan 29 15:57:09 crc kubenswrapper[5008]: E0129 15:57:09.323852 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244"
Jan 29 15:57:14 crc kubenswrapper[5008]: E0129 15:57:14.326032 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.619013 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fsf7n"]
Jan 29 15:57:14 crc kubenswrapper[5008]: E0129 15:57:14.620007 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91204902-80fb-472a-b67c-1d290bd97368" containerName="extract-content"
Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.620033 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="91204902-80fb-472a-b67c-1d290bd97368" containerName="extract-content"
Jan 29 15:57:14 crc kubenswrapper[5008]: E0129 15:57:14.620054 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91204902-80fb-472a-b67c-1d290bd97368" containerName="extract-utilities"
Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.620064 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="91204902-80fb-472a-b67c-1d290bd97368" containerName="extract-utilities"
Jan 29 15:57:14 crc kubenswrapper[5008]: E0129 15:57:14.620093 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91204902-80fb-472a-b67c-1d290bd97368" containerName="registry-server"
Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.620101 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="91204902-80fb-472a-b67c-1d290bd97368" containerName="registry-server"
Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.620314 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="91204902-80fb-472a-b67c-1d290bd97368" containerName="registry-server"
Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.622050 5008 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fsf7n" Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.633625 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fsf7n"] Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.731487 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584c809e-d445-45a3-84dc-aebb0ab47f1d-catalog-content\") pod \"redhat-marketplace-fsf7n\" (UID: \"584c809e-d445-45a3-84dc-aebb0ab47f1d\") " pod="openshift-marketplace/redhat-marketplace-fsf7n" Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.731649 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584c809e-d445-45a3-84dc-aebb0ab47f1d-utilities\") pod \"redhat-marketplace-fsf7n\" (UID: \"584c809e-d445-45a3-84dc-aebb0ab47f1d\") " pod="openshift-marketplace/redhat-marketplace-fsf7n" Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.731754 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s47h5\" (UniqueName: \"kubernetes.io/projected/584c809e-d445-45a3-84dc-aebb0ab47f1d-kube-api-access-s47h5\") pod \"redhat-marketplace-fsf7n\" (UID: \"584c809e-d445-45a3-84dc-aebb0ab47f1d\") " pod="openshift-marketplace/redhat-marketplace-fsf7n" Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.833454 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584c809e-d445-45a3-84dc-aebb0ab47f1d-catalog-content\") pod \"redhat-marketplace-fsf7n\" (UID: \"584c809e-d445-45a3-84dc-aebb0ab47f1d\") " pod="openshift-marketplace/redhat-marketplace-fsf7n" Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.833541 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584c809e-d445-45a3-84dc-aebb0ab47f1d-utilities\") pod \"redhat-marketplace-fsf7n\" (UID: \"584c809e-d445-45a3-84dc-aebb0ab47f1d\") " pod="openshift-marketplace/redhat-marketplace-fsf7n" Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.833614 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s47h5\" (UniqueName: \"kubernetes.io/projected/584c809e-d445-45a3-84dc-aebb0ab47f1d-kube-api-access-s47h5\") pod \"redhat-marketplace-fsf7n\" (UID: \"584c809e-d445-45a3-84dc-aebb0ab47f1d\") " pod="openshift-marketplace/redhat-marketplace-fsf7n" Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.834215 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584c809e-d445-45a3-84dc-aebb0ab47f1d-catalog-content\") pod \"redhat-marketplace-fsf7n\" (UID: \"584c809e-d445-45a3-84dc-aebb0ab47f1d\") " pod="openshift-marketplace/redhat-marketplace-fsf7n" Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.834272 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584c809e-d445-45a3-84dc-aebb0ab47f1d-utilities\") pod \"redhat-marketplace-fsf7n\" (UID: \"584c809e-d445-45a3-84dc-aebb0ab47f1d\") " pod="openshift-marketplace/redhat-marketplace-fsf7n" Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.860281 5008 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-s47h5\" (UniqueName: \"kubernetes.io/projected/584c809e-d445-45a3-84dc-aebb0ab47f1d-kube-api-access-s47h5\") pod \"redhat-marketplace-fsf7n\" (UID: \"584c809e-d445-45a3-84dc-aebb0ab47f1d\") " pod="openshift-marketplace/redhat-marketplace-fsf7n" Jan 29 15:57:14 crc kubenswrapper[5008]: I0129 15:57:14.939439 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fsf7n" Jan 29 15:57:16 crc kubenswrapper[5008]: I0129 15:57:16.118337 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fsf7n"] Jan 29 15:57:16 crc kubenswrapper[5008]: W0129 15:57:16.122801 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod584c809e_d445_45a3_84dc_aebb0ab47f1d.slice/crio-d44daad273b27258030b31ac11a07a7227997b86e2d6579d418d8d86b1a6359c WatchSource:0}: Error finding container d44daad273b27258030b31ac11a07a7227997b86e2d6579d418d8d86b1a6359c: Status 404 returned error can't find the container with id d44daad273b27258030b31ac11a07a7227997b86e2d6579d418d8d86b1a6359c Jan 29 15:57:16 crc kubenswrapper[5008]: I0129 15:57:16.810351 5008 generic.go:334] "Generic (PLEG): container finished" podID="584c809e-d445-45a3-84dc-aebb0ab47f1d" containerID="105b9a43249e6967af25433d63396c59e60e556a090d580d57d9d70ee4546248" exitCode=0 Jan 29 15:57:16 crc kubenswrapper[5008]: I0129 15:57:16.810483 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fsf7n" event={"ID":"584c809e-d445-45a3-84dc-aebb0ab47f1d","Type":"ContainerDied","Data":"105b9a43249e6967af25433d63396c59e60e556a090d580d57d9d70ee4546248"} Jan 29 15:57:16 crc kubenswrapper[5008]: I0129 15:57:16.810660 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fsf7n" event={"ID":"584c809e-d445-45a3-84dc-aebb0ab47f1d","Type":"ContainerStarted","Data":"d44daad273b27258030b31ac11a07a7227997b86e2d6579d418d8d86b1a6359c"} Jan 29 15:57:18 crc kubenswrapper[5008]: I0129 15:57:18.830057 5008 generic.go:334] "Generic (PLEG): container finished" podID="584c809e-d445-45a3-84dc-aebb0ab47f1d" containerID="e9bb0bb4b88e5113680d7a705c1a4e73f76938c8a06828dd6b4734e57b5342fa" exitCode=0 Jan 29 15:57:18 crc kubenswrapper[5008]: I0129 15:57:18.830111 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fsf7n" event={"ID":"584c809e-d445-45a3-84dc-aebb0ab47f1d","Type":"ContainerDied","Data":"e9bb0bb4b88e5113680d7a705c1a4e73f76938c8a06828dd6b4734e57b5342fa"} Jan 29 15:57:19 crc kubenswrapper[5008]: I0129 15:57:19.014710 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hcbjg"] Jan 29 15:57:19 crc kubenswrapper[5008]: I0129 15:57:19.017316 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:19 crc kubenswrapper[5008]: I0129 15:57:19.032854 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hcbjg"] Jan 29 15:57:19 crc kubenswrapper[5008]: I0129 15:57:19.120577 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b365eb1-533a-4b4a-92ed-da844f0144ee-catalog-content\") pod \"certified-operators-hcbjg\" (UID: \"2b365eb1-533a-4b4a-92ed-da844f0144ee\") " pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:19 crc kubenswrapper[5008]: I0129 15:57:19.120867 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9zlf\" (UniqueName: \"kubernetes.io/projected/2b365eb1-533a-4b4a-92ed-da844f0144ee-kube-api-access-x9zlf\") pod \"certified-operators-hcbjg\" (UID: \"2b365eb1-533a-4b4a-92ed-da844f0144ee\") " pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:19 crc kubenswrapper[5008]: I0129 15:57:19.120926 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b365eb1-533a-4b4a-92ed-da844f0144ee-utilities\") pod \"certified-operators-hcbjg\" (UID: \"2b365eb1-533a-4b4a-92ed-da844f0144ee\") " pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:19 crc kubenswrapper[5008]: I0129 15:57:19.223122 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9zlf\" (UniqueName: \"kubernetes.io/projected/2b365eb1-533a-4b4a-92ed-da844f0144ee-kube-api-access-x9zlf\") pod \"certified-operators-hcbjg\" (UID: \"2b365eb1-533a-4b4a-92ed-da844f0144ee\") " pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:19 crc kubenswrapper[5008]: I0129 15:57:19.223384 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b365eb1-533a-4b4a-92ed-da844f0144ee-utilities\") pod \"certified-operators-hcbjg\" (UID: \"2b365eb1-533a-4b4a-92ed-da844f0144ee\") " pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:19 crc kubenswrapper[5008]: I0129 15:57:19.223541 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b365eb1-533a-4b4a-92ed-da844f0144ee-catalog-content\") pod \"certified-operators-hcbjg\" (UID: \"2b365eb1-533a-4b4a-92ed-da844f0144ee\") " pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:19 crc kubenswrapper[5008]: I0129 15:57:19.224042 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b365eb1-533a-4b4a-92ed-da844f0144ee-catalog-content\") pod \"certified-operators-hcbjg\" (UID: \"2b365eb1-533a-4b4a-92ed-da844f0144ee\") " pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:19 crc kubenswrapper[5008]: I0129 15:57:19.224161 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b365eb1-533a-4b4a-92ed-da844f0144ee-utilities\") pod \"certified-operators-hcbjg\" (UID: \"2b365eb1-533a-4b4a-92ed-da844f0144ee\") " pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:19 crc kubenswrapper[5008]: I0129 15:57:19.243024 5008 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-x9zlf\" (UniqueName: \"kubernetes.io/projected/2b365eb1-533a-4b4a-92ed-da844f0144ee-kube-api-access-x9zlf\") pod \"certified-operators-hcbjg\" (UID: \"2b365eb1-533a-4b4a-92ed-da844f0144ee\") " pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:19 crc kubenswrapper[5008]: I0129 15:57:19.341074 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:19 crc kubenswrapper[5008]: I0129 15:57:19.839033 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hcbjg"] Jan 29 15:57:19 crc kubenswrapper[5008]: W0129 15:57:19.841981 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b365eb1_533a_4b4a_92ed_da844f0144ee.slice/crio-1a3954329b89cd77b0d39c5680b9b3bda471ad9dcbda4f8cc6a26d7aa2cb934b WatchSource:0}: Error finding container 1a3954329b89cd77b0d39c5680b9b3bda471ad9dcbda4f8cc6a26d7aa2cb934b: Status 404 returned error can't find the container with id 1a3954329b89cd77b0d39c5680b9b3bda471ad9dcbda4f8cc6a26d7aa2cb934b Jan 29 15:57:20 crc kubenswrapper[5008]: I0129 15:57:20.853616 5008 generic.go:334] "Generic (PLEG): container finished" podID="2b365eb1-533a-4b4a-92ed-da844f0144ee" containerID="029122710bec3ead5773dc17d19527fcf835c2079cb3b4366dd751781af68880" exitCode=0 Jan 29 15:57:20 crc kubenswrapper[5008]: I0129 15:57:20.853702 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hcbjg" event={"ID":"2b365eb1-533a-4b4a-92ed-da844f0144ee","Type":"ContainerDied","Data":"029122710bec3ead5773dc17d19527fcf835c2079cb3b4366dd751781af68880"} Jan 29 15:57:20 crc kubenswrapper[5008]: I0129 15:57:20.854064 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hcbjg" event={"ID":"2b365eb1-533a-4b4a-92ed-da844f0144ee","Type":"ContainerStarted","Data":"1a3954329b89cd77b0d39c5680b9b3bda471ad9dcbda4f8cc6a26d7aa2cb934b"} Jan 29 15:57:20 crc kubenswrapper[5008]: I0129 15:57:20.861660 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fsf7n" event={"ID":"584c809e-d445-45a3-84dc-aebb0ab47f1d","Type":"ContainerStarted","Data":"560c4a087d72c5b97173f2148e008364217cf3873e93b9ddf90930a6cb837f82"} Jan 29 15:57:20 crc kubenswrapper[5008]: I0129 15:57:20.906570 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fsf7n" podStartSLOduration=3.192063329 podStartE2EDuration="6.906550728s" podCreationTimestamp="2026-01-29 15:57:14 +0000 UTC" firstStartedPulling="2026-01-29 15:57:16.814960336 +0000 UTC m=+1780.487814573" lastFinishedPulling="2026-01-29 15:57:20.529447715 +0000 UTC m=+1784.202301972" observedRunningTime="2026-01-29 15:57:20.893158215 +0000 UTC m=+1784.566012482" watchObservedRunningTime="2026-01-29 15:57:20.906550728 +0000 UTC m=+1784.579404955" Jan 29 15:57:21 crc kubenswrapper[5008]: I0129 15:57:21.324580 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:57:21 crc kubenswrapper[5008]: E0129 15:57:21.324861 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:57:22 crc kubenswrapper[5008]: I0129 15:57:22.884212 5008 generic.go:334] "Generic (PLEG): container finished" podID="2b365eb1-533a-4b4a-92ed-da844f0144ee" containerID="c2bc36fbe8f3e25d7d68a9f461e1ef0730dfb9b9c4a4ac61922941d595122f44" exitCode=0 Jan 29 15:57:22 crc kubenswrapper[5008]: I0129 15:57:22.884380 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hcbjg" event={"ID":"2b365eb1-533a-4b4a-92ed-da844f0144ee","Type":"ContainerDied","Data":"c2bc36fbe8f3e25d7d68a9f461e1ef0730dfb9b9c4a4ac61922941d595122f44"} Jan 29 15:57:23 crc kubenswrapper[5008]: I0129 15:57:23.896374 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hcbjg" event={"ID":"2b365eb1-533a-4b4a-92ed-da844f0144ee","Type":"ContainerStarted","Data":"b2349ea6eb40feb88475ff1a1d63808b9c3d0aa5c899aef5d037351e78d59f1c"} Jan 29 15:57:24 crc kubenswrapper[5008]: I0129 15:57:24.542289 5008 scope.go:117] "RemoveContainer" containerID="08622f8ad03658b22a0476180ef40d122a3ce215734ba57beccde8e385c5d87a" Jan 29 15:57:24 crc kubenswrapper[5008]: I0129 15:57:24.575827 5008 scope.go:117] "RemoveContainer" containerID="d9b41e67155f529dbd273cfba785076257b2721a371f6a0e62d1c4355eb9512a" Jan 29 15:57:24 crc kubenswrapper[5008]: I0129 15:57:24.644035 5008 scope.go:117] "RemoveContainer" containerID="a31808be1fa3bc4b89dfda7f79836da13bf6f5c2671c33471c5061bfc1edc1ea" Jan 29 15:57:24 crc kubenswrapper[5008]: I0129 15:57:24.662175 5008 scope.go:117] "RemoveContainer" containerID="d694dd74760c7fb5bcb25c24900b008d41d6e4127c92f70bb60fd3e6fc52c215" Jan 29 15:57:24 crc kubenswrapper[5008]: I0129 15:57:24.710142 5008 scope.go:117] "RemoveContainer" containerID="88e4435b5bfd1a79780b926cd500b5d39ca87b3e8a648cc8d9d789e4cf17dfd1" Jan 29 15:57:24 crc kubenswrapper[5008]: I0129 15:57:24.758363 5008 scope.go:117] "RemoveContainer" containerID="c12146b73a51a5482b71661513ea3874dfe91fc50f839323c14bf1dbe55d4888" Jan 29 15:57:24 crc kubenswrapper[5008]: I0129 15:57:24.812921 5008 scope.go:117] "RemoveContainer" containerID="9c021d2423056bd1e8f0c03523a2b976398e77dc14de7fa3b22ff99a7e7bf44a" Jan 29 15:57:24 crc kubenswrapper[5008]: I0129 15:57:24.939742 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hcbjg" podStartSLOduration=4.213876191 podStartE2EDuration="6.939721764s" podCreationTimestamp="2026-01-29 15:57:18 +0000 UTC" firstStartedPulling="2026-01-29 15:57:20.856741719 +0000 UTC m=+1784.529595956" lastFinishedPulling="2026-01-29 15:57:23.582587282 +0000 UTC m=+1787.255441529" observedRunningTime="2026-01-29 15:57:24.932681304 +0000 UTC m=+1788.605535541" watchObservedRunningTime="2026-01-29 15:57:24.939721764 +0000 UTC m=+1788.612576021" Jan 29 15:57:24 crc kubenswrapper[5008]: I0129 15:57:24.939807 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fsf7n" Jan 29 15:57:24 crc kubenswrapper[5008]: I0129 15:57:24.940468 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fsf7n" Jan 29 15:57:25 crc kubenswrapper[5008]: I0129 15:57:25.000864 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-fsf7n" Jan 29 15:57:25 crc kubenswrapper[5008]: E0129 15:57:25.326654 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:57:25 crc kubenswrapper[5008]: I0129 15:57:25.989648 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fsf7n" Jan 29 15:57:27 crc kubenswrapper[5008]: I0129 15:57:27.191214 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fsf7n"] Jan 29 15:57:27 crc kubenswrapper[5008]: I0129 15:57:27.942742 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fsf7n" podUID="584c809e-d445-45a3-84dc-aebb0ab47f1d" containerName="registry-server" containerID="cri-o://560c4a087d72c5b97173f2148e008364217cf3873e93b9ddf90930a6cb837f82" gracePeriod=2 Jan 29 15:57:28 crc kubenswrapper[5008]: I0129 15:57:28.958166 5008 generic.go:334] "Generic (PLEG): container finished" podID="584c809e-d445-45a3-84dc-aebb0ab47f1d" containerID="560c4a087d72c5b97173f2148e008364217cf3873e93b9ddf90930a6cb837f82" exitCode=0 Jan 29 15:57:28 crc kubenswrapper[5008]: I0129 15:57:28.958264 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fsf7n" event={"ID":"584c809e-d445-45a3-84dc-aebb0ab47f1d","Type":"ContainerDied","Data":"560c4a087d72c5b97173f2148e008364217cf3873e93b9ddf90930a6cb837f82"} Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.079139 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-2158-account-create-update-pjst9"] Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.103312 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-351a-account-create-update-tbrc5"] Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.111400 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-ls2rz"] Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.119806 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-9316-account-create-update-hpxxq"] Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.129692 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-8sctv"] Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.136395 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-2158-account-create-update-pjst9"] Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.146380 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-351a-account-create-update-tbrc5"] Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.156677 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-ch7lz"] Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.167860 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-ls2rz"] Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.172556 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-ch7lz"] Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.180938 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/neutron-9316-account-create-update-hpxxq"] Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.190956 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-8sctv"] Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.333539 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0494524d-f73e-4534-9064-b578d41bea87" path="/var/lib/kubelet/pods/0494524d-f73e-4534-9064-b578d41bea87/volumes" Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.334145 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36bf973b-f73a-425e-9923-09caa2622a41" path="/var/lib/kubelet/pods/36bf973b-f73a-425e-9923-09caa2622a41/volumes" Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.334652 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4256c8e0-3a7b-43fd-9ad4-23b2495bc92e" path="/var/lib/kubelet/pods/4256c8e0-3a7b-43fd-9ad4-23b2495bc92e/volumes" Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.335170 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75706daa-3e40-4bbe-bb1b-44120719d48d" path="/var/lib/kubelet/pods/75706daa-3e40-4bbe-bb1b-44120719d48d/volumes" Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.336184 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="826ac6d8-e950-4bd5-b5f4-0d3f5be5b960" path="/var/lib/kubelet/pods/826ac6d8-e950-4bd5-b5f4-0d3f5be5b960/volumes" Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.336673 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbc0f9ba-13f2-4092-b3e4-a5744ae24174" path="/var/lib/kubelet/pods/bbc0f9ba-13f2-4092-b3e4-a5744ae24174/volumes" Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.342144 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.342185 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.384812 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.970237 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fsf7n" event={"ID":"584c809e-d445-45a3-84dc-aebb0ab47f1d","Type":"ContainerDied","Data":"d44daad273b27258030b31ac11a07a7227997b86e2d6579d418d8d86b1a6359c"} Jan 29 15:57:29 crc kubenswrapper[5008]: I0129 15:57:29.970661 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d44daad273b27258030b31ac11a07a7227997b86e2d6579d418d8d86b1a6359c" Jan 29 15:57:30 crc kubenswrapper[5008]: I0129 15:57:30.016284 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:30 crc kubenswrapper[5008]: I0129 15:57:30.049091 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fsf7n" Jan 29 15:57:30 crc kubenswrapper[5008]: I0129 15:57:30.120569 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584c809e-d445-45a3-84dc-aebb0ab47f1d-utilities\") pod \"584c809e-d445-45a3-84dc-aebb0ab47f1d\" (UID: \"584c809e-d445-45a3-84dc-aebb0ab47f1d\") " Jan 29 15:57:30 crc kubenswrapper[5008]: I0129 15:57:30.120678 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584c809e-d445-45a3-84dc-aebb0ab47f1d-catalog-content\") pod \"584c809e-d445-45a3-84dc-aebb0ab47f1d\" (UID: \"584c809e-d445-45a3-84dc-aebb0ab47f1d\") " Jan 29 15:57:30 crc kubenswrapper[5008]: I0129 15:57:30.120916 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s47h5\" (UniqueName: \"kubernetes.io/projected/584c809e-d445-45a3-84dc-aebb0ab47f1d-kube-api-access-s47h5\") pod \"584c809e-d445-45a3-84dc-aebb0ab47f1d\" (UID: \"584c809e-d445-45a3-84dc-aebb0ab47f1d\") " Jan 29 15:57:30 crc kubenswrapper[5008]: I0129 15:57:30.121803 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584c809e-d445-45a3-84dc-aebb0ab47f1d-utilities" (OuterVolumeSpecName: "utilities") pod "584c809e-d445-45a3-84dc-aebb0ab47f1d" (UID: "584c809e-d445-45a3-84dc-aebb0ab47f1d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:57:30 crc kubenswrapper[5008]: I0129 15:57:30.127198 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584c809e-d445-45a3-84dc-aebb0ab47f1d-kube-api-access-s47h5" (OuterVolumeSpecName: "kube-api-access-s47h5") pod "584c809e-d445-45a3-84dc-aebb0ab47f1d" (UID: "584c809e-d445-45a3-84dc-aebb0ab47f1d"). InnerVolumeSpecName "kube-api-access-s47h5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:57:30 crc kubenswrapper[5008]: I0129 15:57:30.149510 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584c809e-d445-45a3-84dc-aebb0ab47f1d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584c809e-d445-45a3-84dc-aebb0ab47f1d" (UID: "584c809e-d445-45a3-84dc-aebb0ab47f1d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:57:30 crc kubenswrapper[5008]: I0129 15:57:30.223647 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s47h5\" (UniqueName: \"kubernetes.io/projected/584c809e-d445-45a3-84dc-aebb0ab47f1d-kube-api-access-s47h5\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:30 crc kubenswrapper[5008]: I0129 15:57:30.224031 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584c809e-d445-45a3-84dc-aebb0ab47f1d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:30 crc kubenswrapper[5008]: I0129 15:57:30.224123 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584c809e-d445-45a3-84dc-aebb0ab47f1d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:30 crc kubenswrapper[5008]: I0129 15:57:30.985945 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fsf7n" Jan 29 15:57:30 crc kubenswrapper[5008]: I0129 15:57:30.991907 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hcbjg"] Jan 29 15:57:31 crc kubenswrapper[5008]: I0129 15:57:31.041847 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fsf7n"] Jan 29 15:57:31 crc kubenswrapper[5008]: I0129 15:57:31.050406 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fsf7n"] Jan 29 15:57:31 crc kubenswrapper[5008]: E0129 15:57:31.205866 5008 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod584c809e_d445_45a3_84dc_aebb0ab47f1d.slice\": RecentStats: unable to find data in memory cache]" Jan 29 15:57:31 crc kubenswrapper[5008]: I0129 15:57:31.336752 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584c809e-d445-45a3-84dc-aebb0ab47f1d" path="/var/lib/kubelet/pods/584c809e-d445-45a3-84dc-aebb0ab47f1d/volumes" Jan 29 15:57:31 crc kubenswrapper[5008]: I0129 15:57:31.996287 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hcbjg" podUID="2b365eb1-533a-4b4a-92ed-da844f0144ee" containerName="registry-server" containerID="cri-o://b2349ea6eb40feb88475ff1a1d63808b9c3d0aa5c899aef5d037351e78d59f1c" gracePeriod=2 Jan 29 15:57:32 crc kubenswrapper[5008]: I0129 15:57:32.324171 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:57:32 crc kubenswrapper[5008]: E0129 15:57:32.324770 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:57:33 crc kubenswrapper[5008]: I0129 15:57:33.007895 5008 generic.go:334] "Generic (PLEG): container finished" podID="2b365eb1-533a-4b4a-92ed-da844f0144ee" containerID="b2349ea6eb40feb88475ff1a1d63808b9c3d0aa5c899aef5d037351e78d59f1c" exitCode=0 Jan 29 15:57:33 crc kubenswrapper[5008]: I0129 15:57:33.007933 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hcbjg" event={"ID":"2b365eb1-533a-4b4a-92ed-da844f0144ee","Type":"ContainerDied","Data":"b2349ea6eb40feb88475ff1a1d63808b9c3d0aa5c899aef5d037351e78d59f1c"} Jan 29 15:57:33 crc kubenswrapper[5008]: I0129 15:57:33.007956 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hcbjg" event={"ID":"2b365eb1-533a-4b4a-92ed-da844f0144ee","Type":"ContainerDied","Data":"1a3954329b89cd77b0d39c5680b9b3bda471ad9dcbda4f8cc6a26d7aa2cb934b"} Jan 29 15:57:33 crc kubenswrapper[5008]: I0129 15:57:33.007965 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a3954329b89cd77b0d39c5680b9b3bda471ad9dcbda4f8cc6a26d7aa2cb934b" Jan 29 15:57:33 crc kubenswrapper[5008]: I0129 15:57:33.045044 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:33 crc kubenswrapper[5008]: I0129 15:57:33.177210 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9zlf\" (UniqueName: \"kubernetes.io/projected/2b365eb1-533a-4b4a-92ed-da844f0144ee-kube-api-access-x9zlf\") pod \"2b365eb1-533a-4b4a-92ed-da844f0144ee\" (UID: \"2b365eb1-533a-4b4a-92ed-da844f0144ee\") " Jan 29 15:57:33 crc kubenswrapper[5008]: I0129 15:57:33.177293 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b365eb1-533a-4b4a-92ed-da844f0144ee-utilities\") pod \"2b365eb1-533a-4b4a-92ed-da844f0144ee\" (UID: \"2b365eb1-533a-4b4a-92ed-da844f0144ee\") " Jan 29 15:57:33 crc kubenswrapper[5008]: I0129 15:57:33.177373 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b365eb1-533a-4b4a-92ed-da844f0144ee-catalog-content\") pod \"2b365eb1-533a-4b4a-92ed-da844f0144ee\" (UID: \"2b365eb1-533a-4b4a-92ed-da844f0144ee\") " Jan 29 15:57:33 crc kubenswrapper[5008]: I0129 15:57:33.179255 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b365eb1-533a-4b4a-92ed-da844f0144ee-utilities" (OuterVolumeSpecName: "utilities") pod "2b365eb1-533a-4b4a-92ed-da844f0144ee" (UID: "2b365eb1-533a-4b4a-92ed-da844f0144ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:57:33 crc kubenswrapper[5008]: I0129 15:57:33.186007 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b365eb1-533a-4b4a-92ed-da844f0144ee-kube-api-access-x9zlf" (OuterVolumeSpecName: "kube-api-access-x9zlf") pod "2b365eb1-533a-4b4a-92ed-da844f0144ee" (UID: "2b365eb1-533a-4b4a-92ed-da844f0144ee"). InnerVolumeSpecName "kube-api-access-x9zlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:57:33 crc kubenswrapper[5008]: I0129 15:57:33.279524 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9zlf\" (UniqueName: \"kubernetes.io/projected/2b365eb1-533a-4b4a-92ed-da844f0144ee-kube-api-access-x9zlf\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:33 crc kubenswrapper[5008]: I0129 15:57:33.279560 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b365eb1-533a-4b4a-92ed-da844f0144ee-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:34 crc kubenswrapper[5008]: I0129 15:57:34.016986 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hcbjg" Jan 29 15:57:35 crc kubenswrapper[5008]: I0129 15:57:35.087849 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b365eb1-533a-4b4a-92ed-da844f0144ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b365eb1-533a-4b4a-92ed-da844f0144ee" (UID: "2b365eb1-533a-4b4a-92ed-da844f0144ee"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:57:35 crc kubenswrapper[5008]: I0129 15:57:35.117210 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b365eb1-533a-4b4a-92ed-da844f0144ee-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:35 crc kubenswrapper[5008]: I0129 15:57:35.260754 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hcbjg"] Jan 29 15:57:35 crc kubenswrapper[5008]: I0129 15:57:35.267801 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hcbjg"] Jan 29 15:57:35 crc kubenswrapper[5008]: I0129 15:57:35.335363 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b365eb1-533a-4b4a-92ed-da844f0144ee" path="/var/lib/kubelet/pods/2b365eb1-533a-4b4a-92ed-da844f0144ee/volumes" Jan 29 15:57:37 crc kubenswrapper[5008]: E0129 15:57:37.331052 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:57:44 crc kubenswrapper[5008]: I0129 15:57:44.323724 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:57:44 crc kubenswrapper[5008]: E0129 15:57:44.324523 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:57:51 crc kubenswrapper[5008]: E0129 15:57:51.329256 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:57:55 crc kubenswrapper[5008]: I0129 15:57:55.323897 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:57:55 crc kubenswrapper[5008]: E0129 15:57:55.324811 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:58:03 crc kubenswrapper[5008]: E0129 15:58:03.326423 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:58:06 crc kubenswrapper[5008]: I0129 15:58:06.324488 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:58:06 crc kubenswrapper[5008]: E0129 15:58:06.325348 5008 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:58:09 crc kubenswrapper[5008]: I0129 15:58:09.061682 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-rdpcb"] Jan 29 15:58:09 crc kubenswrapper[5008]: I0129 15:58:09.072056 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-rdpcb"] Jan 29 15:58:09 crc kubenswrapper[5008]: I0129 15:58:09.335769 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a79f96d-ad2b-4b69-b9e9-719b1cc0b183" path="/var/lib/kubelet/pods/4a79f96d-ad2b-4b69-b9e9-719b1cc0b183/volumes" Jan 29 15:58:14 crc kubenswrapper[5008]: E0129 15:58:14.454477 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 29 15:58:14 crc kubenswrapper[5008]: E0129 15:58:14.455359 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zk8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d40740f9-e8d8-4f46-b8b0-d913a6c33210): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:58:14 crc kubenswrapper[5008]: E0129 15:58:14.456613 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:58:21 crc kubenswrapper[5008]: I0129 15:58:21.323914 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:58:21 crc kubenswrapper[5008]: E0129 15:58:21.324680 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:58:25 crc kubenswrapper[5008]: I0129 15:58:25.082891 5008 scope.go:117] "RemoveContainer" containerID="eacc0139ac8b112a9da7c9f07cae68774d1d37d4498b8a7bcd2ca73c4e6b805f" Jan 29 15:58:25 crc kubenswrapper[5008]: I0129 15:58:25.122445 5008 scope.go:117] "RemoveContainer" containerID="f7337579b0c05cef5036ba373b06ec94f4c86859c74c4cf38a1a6c866cfa3d5e" Jan 29 15:58:25 crc kubenswrapper[5008]: I0129 15:58:25.159151 5008 scope.go:117] "RemoveContainer" containerID="6c61687e12f73c515f558a6a4b2824cb17762d52f0bf2ebbaaed1f1b074de225" Jan 29 15:58:25 crc kubenswrapper[5008]: I0129 15:58:25.210735 5008 scope.go:117] "RemoveContainer" containerID="e3f4a0bf80eb8c9f3329a22ef35badafd100d8a972517b1491615c6612a7b55a" Jan 29 15:58:25 crc kubenswrapper[5008]: I0129 15:58:25.243678 5008 scope.go:117] "RemoveContainer" containerID="ca99078315f1792020893b0155199b35cf28a5d2e22b71f951d215c87d9c1097" Jan 29 15:58:25 crc kubenswrapper[5008]: I0129 15:58:25.317181 5008 scope.go:117] "RemoveContainer" containerID="6f05c53cf48d2a332db38d95de29d8cfb8a983e457e1d6fed6a77e002f9f5183" Jan 29 15:58:25 crc kubenswrapper[5008]: I0129 15:58:25.356317 5008 scope.go:117] "RemoveContainer" containerID="64cf9712b9a6a018d4f38c41a288a8f15705222afe6688de0979f4ea4ab02893" Jan 29 15:58:27 crc kubenswrapper[5008]: E0129 15:58:27.333952 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:58:34 crc kubenswrapper[5008]: I0129 15:58:34.323901 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:58:34 crc kubenswrapper[5008]: E0129 15:58:34.324546 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:58:42 crc kubenswrapper[5008]: E0129 15:58:42.327049 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:58:46 crc kubenswrapper[5008]: I0129 15:58:46.323760 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:58:46 crc kubenswrapper[5008]: E0129 15:58:46.324452 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:58:54 crc kubenswrapper[5008]: E0129 15:58:54.326225 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:58:58 crc kubenswrapper[5008]: I0129 15:58:58.323601 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:58:58 crc kubenswrapper[5008]: E0129 15:58:58.324052 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:59:09 crc kubenswrapper[5008]: E0129 15:59:09.327868 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:59:12 crc kubenswrapper[5008]: I0129 15:59:12.323899 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:59:12 crc kubenswrapper[5008]: E0129 15:59:12.324866 5008 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:59:24 crc kubenswrapper[5008]: E0129 15:59:24.325943 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:59:25 crc kubenswrapper[5008]: I0129 15:59:25.526419 5008 scope.go:117] "RemoveContainer" containerID="bcb62e0a30103f70c2e23448f433250c8f5931d78a534a384a1188d58be16119" Jan 29 15:59:25 crc kubenswrapper[5008]: I0129 15:59:25.547630 5008 scope.go:117] "RemoveContainer" containerID="f79f38ff0afa3885296e624a49ae42810a26d27a384ceccb3214269c19350348" Jan 29 15:59:26 crc kubenswrapper[5008]: I0129 15:59:26.324411 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:59:26 crc kubenswrapper[5008]: E0129 15:59:26.324988 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:59:35 crc kubenswrapper[5008]: I0129 15:59:35.052378 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-tqc26"] Jan 29 15:59:35 crc kubenswrapper[5008]: I0129 15:59:35.061928 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-dkqkc"] Jan 29 15:59:35 crc kubenswrapper[5008]: I0129 15:59:35.071210 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-tqc26"] Jan 29 15:59:35 crc kubenswrapper[5008]: I0129 15:59:35.078962 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-dkqkc"] Jan 29 15:59:35 crc kubenswrapper[5008]: I0129 15:59:35.338028 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39abc131-ba3e-4cd8-916a-520789627dd5" path="/var/lib/kubelet/pods/39abc131-ba3e-4cd8-916a-520789627dd5/volumes" Jan 29 15:59:35 crc kubenswrapper[5008]: I0129 15:59:35.338934 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3a233d5-bf7f-4906-881c-5e81ea64e0e8" path="/var/lib/kubelet/pods/c3a233d5-bf7f-4906-881c-5e81ea64e0e8/volumes" Jan 29 15:59:37 crc kubenswrapper[5008]: I0129 15:59:37.329427 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:59:37 crc kubenswrapper[5008]: E0129 15:59:37.330020 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" 
podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:59:38 crc kubenswrapper[5008]: E0129 15:59:38.327464 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:59:42 crc kubenswrapper[5008]: I0129 15:59:42.032035 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-n7wgw"] Jan 29 15:59:42 crc kubenswrapper[5008]: I0129 15:59:42.043357 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-n7wgw"] Jan 29 15:59:43 crc kubenswrapper[5008]: I0129 15:59:43.337610 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8277eb2b-44f8-4fd9-af92-1832e0272e0e" path="/var/lib/kubelet/pods/8277eb2b-44f8-4fd9-af92-1832e0272e0e/volumes" Jan 29 15:59:48 crc kubenswrapper[5008]: I0129 15:59:48.325484 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:59:48 crc kubenswrapper[5008]: E0129 15:59:48.326963 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 15:59:51 crc kubenswrapper[5008]: I0129 15:59:51.034433 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-fwhd5"] Jan 29 15:59:51 crc kubenswrapper[5008]: I0129 15:59:51.044861 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-fwhd5"] Jan 29 15:59:51 crc kubenswrapper[5008]: E0129 15:59:51.327723 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 15:59:51 crc kubenswrapper[5008]: I0129 15:59:51.342497 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9069f34b-ed91-4ced-8b05-91b83dd02938" path="/var/lib/kubelet/pods/9069f34b-ed91-4ced-8b05-91b83dd02938/volumes" Jan 29 15:59:53 crc kubenswrapper[5008]: I0129 15:59:53.030760 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-4h8lc"] Jan 29 15:59:53 crc kubenswrapper[5008]: I0129 15:59:53.043177 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-4h8lc"] Jan 29 15:59:53 crc kubenswrapper[5008]: I0129 15:59:53.343211 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c2a1a18-16ff-4419-b233-8649579edbea" path="/var/lib/kubelet/pods/6c2a1a18-16ff-4419-b233-8649579edbea/volumes" Jan 29 15:59:59 crc kubenswrapper[5008]: I0129 15:59:59.325215 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 15:59:59 crc kubenswrapper[5008]: E0129 15:59:59.326431 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.146685 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s"] Jan 29 16:00:00 crc kubenswrapper[5008]: E0129 16:00:00.147478 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b365eb1-533a-4b4a-92ed-da844f0144ee" containerName="extract-utilities" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.147512 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b365eb1-533a-4b4a-92ed-da844f0144ee" containerName="extract-utilities" Jan 29 16:00:00 crc kubenswrapper[5008]: E0129 16:00:00.147550 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="584c809e-d445-45a3-84dc-aebb0ab47f1d" containerName="extract-content" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.147562 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="584c809e-d445-45a3-84dc-aebb0ab47f1d" containerName="extract-content" Jan 29 16:00:00 crc kubenswrapper[5008]: E0129 16:00:00.147583 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b365eb1-533a-4b4a-92ed-da844f0144ee" containerName="registry-server" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.147592 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b365eb1-533a-4b4a-92ed-da844f0144ee" containerName="registry-server" Jan 29 16:00:00 crc kubenswrapper[5008]: E0129 16:00:00.147607 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="584c809e-d445-45a3-84dc-aebb0ab47f1d" containerName="extract-utilities" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.147615 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="584c809e-d445-45a3-84dc-aebb0ab47f1d" containerName="extract-utilities" Jan 29 16:00:00 crc kubenswrapper[5008]: E0129 16:00:00.147640 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="584c809e-d445-45a3-84dc-aebb0ab47f1d" containerName="registry-server" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.147646 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="584c809e-d445-45a3-84dc-aebb0ab47f1d" containerName="registry-server" Jan 29 16:00:00 crc kubenswrapper[5008]: E0129 16:00:00.147655 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b365eb1-533a-4b4a-92ed-da844f0144ee" containerName="extract-content" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.147661 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b365eb1-533a-4b4a-92ed-da844f0144ee" containerName="extract-content" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.147918 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b365eb1-533a-4b4a-92ed-da844f0144ee" containerName="registry-server" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.147935 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="584c809e-d445-45a3-84dc-aebb0ab47f1d" containerName="registry-server" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.148909 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.152279 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.152496 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.157099 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s"] Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.278757 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsx9t\" (UniqueName: \"kubernetes.io/projected/06e8011a-cb7c-4dea-a014-3053cd43b7a1-kube-api-access-dsx9t\") pod \"collect-profiles-29495040-n7t6s\" (UID: \"06e8011a-cb7c-4dea-a014-3053cd43b7a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.278852 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06e8011a-cb7c-4dea-a014-3053cd43b7a1-secret-volume\") pod \"collect-profiles-29495040-n7t6s\" (UID: \"06e8011a-cb7c-4dea-a014-3053cd43b7a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.278874 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06e8011a-cb7c-4dea-a014-3053cd43b7a1-config-volume\") pod \"collect-profiles-29495040-n7t6s\" (UID: \"06e8011a-cb7c-4dea-a014-3053cd43b7a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.380459 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsx9t\" (UniqueName: \"kubernetes.io/projected/06e8011a-cb7c-4dea-a014-3053cd43b7a1-kube-api-access-dsx9t\") pod \"collect-profiles-29495040-n7t6s\" (UID: \"06e8011a-cb7c-4dea-a014-3053cd43b7a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.380538 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06e8011a-cb7c-4dea-a014-3053cd43b7a1-secret-volume\") pod \"collect-profiles-29495040-n7t6s\" (UID: \"06e8011a-cb7c-4dea-a014-3053cd43b7a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.380566 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06e8011a-cb7c-4dea-a014-3053cd43b7a1-config-volume\") pod \"collect-profiles-29495040-n7t6s\" (UID: \"06e8011a-cb7c-4dea-a014-3053cd43b7a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.381601 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06e8011a-cb7c-4dea-a014-3053cd43b7a1-config-volume\") pod 
\"collect-profiles-29495040-n7t6s\" (UID: \"06e8011a-cb7c-4dea-a014-3053cd43b7a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.388439 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06e8011a-cb7c-4dea-a014-3053cd43b7a1-secret-volume\") pod \"collect-profiles-29495040-n7t6s\" (UID: \"06e8011a-cb7c-4dea-a014-3053cd43b7a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.396600 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsx9t\" (UniqueName: \"kubernetes.io/projected/06e8011a-cb7c-4dea-a014-3053cd43b7a1-kube-api-access-dsx9t\") pod \"collect-profiles-29495040-n7t6s\" (UID: \"06e8011a-cb7c-4dea-a014-3053cd43b7a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.471031 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" Jan 29 16:00:00 crc kubenswrapper[5008]: I0129 16:00:00.894248 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s"] Jan 29 16:00:01 crc kubenswrapper[5008]: I0129 16:00:01.494708 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" event={"ID":"06e8011a-cb7c-4dea-a014-3053cd43b7a1","Type":"ContainerStarted","Data":"0e0ab61946d23622a6cb2e540a378fca32853129e8933de727cd54442908ab35"} Jan 29 16:00:01 crc kubenswrapper[5008]: I0129 16:00:01.494766 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" event={"ID":"06e8011a-cb7c-4dea-a014-3053cd43b7a1","Type":"ContainerStarted","Data":"fa37749a9812f438ecdf7408e29b49e813d4f12c3c52d260d88c57b975a68b39"} Jan 29 16:00:01 crc kubenswrapper[5008]: I0129 16:00:01.518187 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" podStartSLOduration=1.518165251 podStartE2EDuration="1.518165251s" podCreationTimestamp="2026-01-29 16:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:00:01.509149144 +0000 UTC m=+1945.182003401" watchObservedRunningTime="2026-01-29 16:00:01.518165251 +0000 UTC m=+1945.191019478" Jan 29 16:00:02 crc kubenswrapper[5008]: I0129 16:00:02.507548 5008 generic.go:334] "Generic (PLEG): container finished" podID="06e8011a-cb7c-4dea-a014-3053cd43b7a1" containerID="0e0ab61946d23622a6cb2e540a378fca32853129e8933de727cd54442908ab35" exitCode=0 Jan 29 16:00:02 crc kubenswrapper[5008]: I0129 16:00:02.507625 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" event={"ID":"06e8011a-cb7c-4dea-a014-3053cd43b7a1","Type":"ContainerDied","Data":"0e0ab61946d23622a6cb2e540a378fca32853129e8933de727cd54442908ab35"} Jan 29 16:00:03 crc kubenswrapper[5008]: I0129 16:00:03.835318 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" Jan 29 16:00:03 crc kubenswrapper[5008]: I0129 16:00:03.945369 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06e8011a-cb7c-4dea-a014-3053cd43b7a1-config-volume\") pod \"06e8011a-cb7c-4dea-a014-3053cd43b7a1\" (UID: \"06e8011a-cb7c-4dea-a014-3053cd43b7a1\") " Jan 29 16:00:03 crc kubenswrapper[5008]: I0129 16:00:03.945565 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06e8011a-cb7c-4dea-a014-3053cd43b7a1-secret-volume\") pod \"06e8011a-cb7c-4dea-a014-3053cd43b7a1\" (UID: \"06e8011a-cb7c-4dea-a014-3053cd43b7a1\") " Jan 29 16:00:03 crc kubenswrapper[5008]: I0129 16:00:03.945642 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsx9t\" (UniqueName: \"kubernetes.io/projected/06e8011a-cb7c-4dea-a014-3053cd43b7a1-kube-api-access-dsx9t\") pod \"06e8011a-cb7c-4dea-a014-3053cd43b7a1\" (UID: \"06e8011a-cb7c-4dea-a014-3053cd43b7a1\") " Jan 29 16:00:03 crc kubenswrapper[5008]: I0129 16:00:03.946337 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06e8011a-cb7c-4dea-a014-3053cd43b7a1-config-volume" (OuterVolumeSpecName: "config-volume") pod "06e8011a-cb7c-4dea-a014-3053cd43b7a1" (UID: "06e8011a-cb7c-4dea-a014-3053cd43b7a1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:00:03 crc kubenswrapper[5008]: I0129 16:00:03.953009 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06e8011a-cb7c-4dea-a014-3053cd43b7a1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "06e8011a-cb7c-4dea-a014-3053cd43b7a1" (UID: "06e8011a-cb7c-4dea-a014-3053cd43b7a1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:00:03 crc kubenswrapper[5008]: I0129 16:00:03.953185 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06e8011a-cb7c-4dea-a014-3053cd43b7a1-kube-api-access-dsx9t" (OuterVolumeSpecName: "kube-api-access-dsx9t") pod "06e8011a-cb7c-4dea-a014-3053cd43b7a1" (UID: "06e8011a-cb7c-4dea-a014-3053cd43b7a1"). InnerVolumeSpecName "kube-api-access-dsx9t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:00:04 crc kubenswrapper[5008]: I0129 16:00:04.047648 5008 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06e8011a-cb7c-4dea-a014-3053cd43b7a1-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:04 crc kubenswrapper[5008]: I0129 16:00:04.047916 5008 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06e8011a-cb7c-4dea-a014-3053cd43b7a1-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:04 crc kubenswrapper[5008]: I0129 16:00:04.047925 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsx9t\" (UniqueName: \"kubernetes.io/projected/06e8011a-cb7c-4dea-a014-3053cd43b7a1-kube-api-access-dsx9t\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:04 crc kubenswrapper[5008]: E0129 16:00:04.325367 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:00:04 crc kubenswrapper[5008]: I0129 16:00:04.529681 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" event={"ID":"06e8011a-cb7c-4dea-a014-3053cd43b7a1","Type":"ContainerDied","Data":"fa37749a9812f438ecdf7408e29b49e813d4f12c3c52d260d88c57b975a68b39"} Jan 29 16:00:04 crc kubenswrapper[5008]: I0129 16:00:04.529757 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa37749a9812f438ecdf7408e29b49e813d4f12c3c52d260d88c57b975a68b39" Jan 29 16:00:04 crc kubenswrapper[5008]: I0129 16:00:04.529830 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-n7t6s" Jan 29 16:00:08 crc kubenswrapper[5008]: I0129 16:00:08.048654 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-rcl2z"] Jan 29 16:00:08 crc kubenswrapper[5008]: I0129 16:00:08.058808 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-rcl2z"] Jan 29 16:00:09 crc kubenswrapper[5008]: I0129 16:00:09.430128 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ec0e696-652d-463e-b97e-dad0065a543b" path="/var/lib/kubelet/pods/4ec0e696-652d-463e-b97e-dad0065a543b/volumes" Jan 29 16:00:14 crc kubenswrapper[5008]: I0129 16:00:14.323950 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 16:00:14 crc kubenswrapper[5008]: E0129 16:00:14.324489 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:00:19 crc kubenswrapper[5008]: E0129 16:00:19.328019 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:00:25 crc kubenswrapper[5008]: I0129 16:00:25.323562 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 16:00:25 crc kubenswrapper[5008]: E0129 16:00:25.324583 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:00:25 crc kubenswrapper[5008]: I0129 16:00:25.609305 5008 scope.go:117] "RemoveContainer" containerID="d1071455a85ae82bd88cb84ca9e9539c64ca11a3c5fff1412a478114adf32c80" Jan 29 16:00:25 crc kubenswrapper[5008]: I0129 16:00:25.654677 5008 scope.go:117] "RemoveContainer" containerID="0d834ba968e6d63e097a6aef362d3f06eb5d6b998580ed84a27255f328fc86b5" Jan 29 16:00:25 crc kubenswrapper[5008]: I0129 16:00:25.708638 5008 scope.go:117] "RemoveContainer" containerID="bde50669bd65351b30c48ee0e65fb0911aba9f1d7624eae95461658432ebf883" Jan 29 16:00:25 crc kubenswrapper[5008]: I0129 16:00:25.755913 5008 scope.go:117] "RemoveContainer" containerID="4235463096f31772a59e698a0a90916f6b2c055027357bae8128e733c3b9757d" Jan 29 16:00:25 crc kubenswrapper[5008]: I0129 16:00:25.789692 5008 scope.go:117] "RemoveContainer" containerID="9b5824f48cc959e52e85d63863855d59e169e89e7ec31bd5ec6b371bffc34475" Jan 29 16:00:25 crc kubenswrapper[5008]: I0129 16:00:25.837450 5008 scope.go:117] "RemoveContainer" containerID="ea56cb31969ede4dc77690e8380474b589122f4e8ba458f2575d15b6351054fb" Jan 29 16:00:30 crc kubenswrapper[5008]: E0129 16:00:30.326536 5008 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:00:31 crc kubenswrapper[5008]: I0129 16:00:31.266221 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-ztdsl_3c5e8be2-fe94-488c-801e-d1a56700bfa5/cluster-samples-operator/0.log" Jan 29 16:00:31 crc kubenswrapper[5008]: I0129 16:00:31.266497 5008 generic.go:334] "Generic (PLEG): container finished" podID="3c5e8be2-fe94-488c-801e-d1a56700bfa5" containerID="100ecffc6cff9494691eabff05729c4d5b7c0766f0e736a4cc1be50aa03aa882" exitCode=2 Jan 29 16:00:31 crc kubenswrapper[5008]: I0129 16:00:31.266529 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl" event={"ID":"3c5e8be2-fe94-488c-801e-d1a56700bfa5","Type":"ContainerDied","Data":"100ecffc6cff9494691eabff05729c4d5b7c0766f0e736a4cc1be50aa03aa882"} Jan 29 16:00:31 crc kubenswrapper[5008]: I0129 16:00:31.267156 5008 scope.go:117] "RemoveContainer" containerID="100ecffc6cff9494691eabff05729c4d5b7c0766f0e736a4cc1be50aa03aa882" Jan 29 16:00:32 crc kubenswrapper[5008]: I0129 16:00:32.279874 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-ztdsl_3c5e8be2-fe94-488c-801e-d1a56700bfa5/cluster-samples-operator/0.log" Jan 29 16:00:32 crc kubenswrapper[5008]: I0129 16:00:32.280212 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ztdsl" event={"ID":"3c5e8be2-fe94-488c-801e-d1a56700bfa5","Type":"ContainerStarted","Data":"d4cdaff99bba5504668a15c6176a1c591e22146a445a284bad1d8535fe560b21"} Jan 29 16:00:39 crc kubenswrapper[5008]: I0129 16:00:39.324826 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 16:00:39 crc kubenswrapper[5008]: E0129 16:00:39.326142 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:00:42 crc kubenswrapper[5008]: E0129 16:00:42.327151 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:00:44 crc kubenswrapper[5008]: I0129 16:00:44.057362 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-stxgj"] Jan 29 16:00:44 crc kubenswrapper[5008]: I0129 16:00:44.066869 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-lmdpk"] Jan 29 16:00:44 crc kubenswrapper[5008]: I0129 16:00:44.076645 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-stxgj"] Jan 29 16:00:44 crc kubenswrapper[5008]: I0129 16:00:44.107452 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-api-db-create-lmdpk"] Jan 29 16:00:45 crc kubenswrapper[5008]: I0129 16:00:45.027823 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-9xnkt"] Jan 29 16:00:45 crc kubenswrapper[5008]: I0129 16:00:45.037943 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-e284-account-create-update-cz9rj"] Jan 29 16:00:45 crc kubenswrapper[5008]: I0129 16:00:45.047858 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-9xnkt"] Jan 29 16:00:45 crc kubenswrapper[5008]: I0129 16:00:45.057839 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-e284-account-create-update-cz9rj"] Jan 29 16:00:45 crc kubenswrapper[5008]: I0129 16:00:45.334348 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="110f96e6-c230-44f3-9247-90283da8976c" path="/var/lib/kubelet/pods/110f96e6-c230-44f3-9247-90283da8976c/volumes" Jan 29 16:00:45 crc kubenswrapper[5008]: I0129 16:00:45.335289 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f34f608-b2f8-452e-8f0d-ef600929c36e" path="/var/lib/kubelet/pods/7f34f608-b2f8-452e-8f0d-ef600929c36e/volumes" Jan 29 16:00:45 crc kubenswrapper[5008]: I0129 16:00:45.335908 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e" path="/var/lib/kubelet/pods/ac86c8fe-7377-4407-aef2-ef0c1a6e1c5e/volumes" Jan 29 16:00:45 crc kubenswrapper[5008]: I0129 16:00:45.336515 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6a58042-fefd-43b8-b186-905dcfc7b1af" path="/var/lib/kubelet/pods/d6a58042-fefd-43b8-b186-905dcfc7b1af/volumes" Jan 29 16:00:46 crc kubenswrapper[5008]: I0129 16:00:46.037075 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-fe67-account-create-update-bk5t9"] Jan 29 16:00:46 crc kubenswrapper[5008]: I0129 16:00:46.048679 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-4e36-account-create-update-mthn6"] Jan 29 16:00:46 crc kubenswrapper[5008]: I0129 16:00:46.058031 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-fe67-account-create-update-bk5t9"] Jan 29 16:00:46 crc kubenswrapper[5008]: I0129 16:00:46.066649 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-4e36-account-create-update-mthn6"] Jan 29 16:00:47 crc kubenswrapper[5008]: I0129 16:00:47.341955 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63f2899c-3ee5-4d2c-ae4f-487783fede07" path="/var/lib/kubelet/pods/63f2899c-3ee5-4d2c-ae4f-487783fede07/volumes" Jan 29 16:00:47 crc kubenswrapper[5008]: I0129 16:00:47.343264 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="804a6c8c-4d3d-4949-adad-bf28d059ac39" path="/var/lib/kubelet/pods/804a6c8c-4d3d-4949-adad-bf28d059ac39/volumes" Jan 29 16:00:54 crc kubenswrapper[5008]: I0129 16:00:54.324182 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 16:00:54 crc kubenswrapper[5008]: E0129 16:00:54.325061 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:00:56 crc kubenswrapper[5008]: E0129 16:00:56.326966 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.175532 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29495041-5xjnv"] Jan 29 16:01:00 crc kubenswrapper[5008]: E0129 16:01:00.176737 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06e8011a-cb7c-4dea-a014-3053cd43b7a1" containerName="collect-profiles" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.176761 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="06e8011a-cb7c-4dea-a014-3053cd43b7a1" containerName="collect-profiles" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.177117 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="06e8011a-cb7c-4dea-a014-3053cd43b7a1" containerName="collect-profiles" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.178101 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29495041-5xjnv" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.192158 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29495041-5xjnv"] Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.248550 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-combined-ca-bundle\") pod \"keystone-cron-29495041-5xjnv\" (UID: \"3b2cbc69-268a-4c30-b9c0-d1352f380259\") " pod="openstack/keystone-cron-29495041-5xjnv" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.248841 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-fernet-keys\") pod \"keystone-cron-29495041-5xjnv\" (UID: \"3b2cbc69-268a-4c30-b9c0-d1352f380259\") " pod="openstack/keystone-cron-29495041-5xjnv" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.249243 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-config-data\") pod \"keystone-cron-29495041-5xjnv\" (UID: \"3b2cbc69-268a-4c30-b9c0-d1352f380259\") " pod="openstack/keystone-cron-29495041-5xjnv" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.249447 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q69lw\" (UniqueName: \"kubernetes.io/projected/3b2cbc69-268a-4c30-b9c0-d1352f380259-kube-api-access-q69lw\") pod \"keystone-cron-29495041-5xjnv\" (UID: \"3b2cbc69-268a-4c30-b9c0-d1352f380259\") " pod="openstack/keystone-cron-29495041-5xjnv" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.351701 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-combined-ca-bundle\") pod \"keystone-cron-29495041-5xjnv\" (UID: \"3b2cbc69-268a-4c30-b9c0-d1352f380259\") " 
pod="openstack/keystone-cron-29495041-5xjnv" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.351772 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-fernet-keys\") pod \"keystone-cron-29495041-5xjnv\" (UID: \"3b2cbc69-268a-4c30-b9c0-d1352f380259\") " pod="openstack/keystone-cron-29495041-5xjnv" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.352014 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-config-data\") pod \"keystone-cron-29495041-5xjnv\" (UID: \"3b2cbc69-268a-4c30-b9c0-d1352f380259\") " pod="openstack/keystone-cron-29495041-5xjnv" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.352063 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q69lw\" (UniqueName: \"kubernetes.io/projected/3b2cbc69-268a-4c30-b9c0-d1352f380259-kube-api-access-q69lw\") pod \"keystone-cron-29495041-5xjnv\" (UID: \"3b2cbc69-268a-4c30-b9c0-d1352f380259\") " pod="openstack/keystone-cron-29495041-5xjnv" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.357701 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-fernet-keys\") pod \"keystone-cron-29495041-5xjnv\" (UID: \"3b2cbc69-268a-4c30-b9c0-d1352f380259\") " pod="openstack/keystone-cron-29495041-5xjnv" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.358048 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-config-data\") pod \"keystone-cron-29495041-5xjnv\" (UID: \"3b2cbc69-268a-4c30-b9c0-d1352f380259\") " pod="openstack/keystone-cron-29495041-5xjnv" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.358226 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-combined-ca-bundle\") pod \"keystone-cron-29495041-5xjnv\" (UID: \"3b2cbc69-268a-4c30-b9c0-d1352f380259\") " pod="openstack/keystone-cron-29495041-5xjnv" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.369445 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q69lw\" (UniqueName: \"kubernetes.io/projected/3b2cbc69-268a-4c30-b9c0-d1352f380259-kube-api-access-q69lw\") pod \"keystone-cron-29495041-5xjnv\" (UID: \"3b2cbc69-268a-4c30-b9c0-d1352f380259\") " pod="openstack/keystone-cron-29495041-5xjnv" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.508128 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29495041-5xjnv" Jan 29 16:01:00 crc kubenswrapper[5008]: I0129 16:01:00.949426 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29495041-5xjnv"] Jan 29 16:01:00 crc kubenswrapper[5008]: W0129 16:01:00.953426 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b2cbc69_268a_4c30_b9c0_d1352f380259.slice/crio-9ec098a8c5a25784ebfb9adf6dbd1984cb0c733cfe0df9791ed97b64e820c3d5 WatchSource:0}: Error finding container 9ec098a8c5a25784ebfb9adf6dbd1984cb0c733cfe0df9791ed97b64e820c3d5: Status 404 returned error can't find the container with id 9ec098a8c5a25784ebfb9adf6dbd1984cb0c733cfe0df9791ed97b64e820c3d5 Jan 29 16:01:01 crc kubenswrapper[5008]: I0129 16:01:01.530485 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29495041-5xjnv" event={"ID":"3b2cbc69-268a-4c30-b9c0-d1352f380259","Type":"ContainerStarted","Data":"7d30fd222b6ddf98bd917e6ff988a39a08202ad915127d1fd074c5440e004774"} Jan 29 16:01:01 crc kubenswrapper[5008]: I0129 16:01:01.532028 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29495041-5xjnv" event={"ID":"3b2cbc69-268a-4c30-b9c0-d1352f380259","Type":"ContainerStarted","Data":"9ec098a8c5a25784ebfb9adf6dbd1984cb0c733cfe0df9791ed97b64e820c3d5"} Jan 29 16:01:01 crc kubenswrapper[5008]: I0129 16:01:01.556242 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29495041-5xjnv" podStartSLOduration=1.556213252 podStartE2EDuration="1.556213252s" podCreationTimestamp="2026-01-29 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:01:01.551704764 +0000 UTC m=+2005.224559051" watchObservedRunningTime="2026-01-29 16:01:01.556213252 +0000 UTC m=+2005.229067529" Jan 29 16:01:03 crc kubenswrapper[5008]: I0129 16:01:03.547953 5008 generic.go:334] "Generic (PLEG): container finished" podID="3b2cbc69-268a-4c30-b9c0-d1352f380259" containerID="7d30fd222b6ddf98bd917e6ff988a39a08202ad915127d1fd074c5440e004774" exitCode=0 Jan 29 16:01:03 crc kubenswrapper[5008]: I0129 16:01:03.548071 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29495041-5xjnv" event={"ID":"3b2cbc69-268a-4c30-b9c0-d1352f380259","Type":"ContainerDied","Data":"7d30fd222b6ddf98bd917e6ff988a39a08202ad915127d1fd074c5440e004774"} Jan 29 16:01:04 crc kubenswrapper[5008]: I0129 16:01:04.931517 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29495041-5xjnv" Jan 29 16:01:05 crc kubenswrapper[5008]: I0129 16:01:05.049378 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q69lw\" (UniqueName: \"kubernetes.io/projected/3b2cbc69-268a-4c30-b9c0-d1352f380259-kube-api-access-q69lw\") pod \"3b2cbc69-268a-4c30-b9c0-d1352f380259\" (UID: \"3b2cbc69-268a-4c30-b9c0-d1352f380259\") " Jan 29 16:01:05 crc kubenswrapper[5008]: I0129 16:01:05.049736 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-combined-ca-bundle\") pod \"3b2cbc69-268a-4c30-b9c0-d1352f380259\" (UID: \"3b2cbc69-268a-4c30-b9c0-d1352f380259\") " Jan 29 16:01:05 crc kubenswrapper[5008]: I0129 16:01:05.049805 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-fernet-keys\") pod \"3b2cbc69-268a-4c30-b9c0-d1352f380259\" (UID: \"3b2cbc69-268a-4c30-b9c0-d1352f380259\") " Jan 29 16:01:05 crc kubenswrapper[5008]: I0129 16:01:05.049896 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-config-data\") pod \"3b2cbc69-268a-4c30-b9c0-d1352f380259\" (UID: \"3b2cbc69-268a-4c30-b9c0-d1352f380259\") " Jan 29 16:01:05 crc kubenswrapper[5008]: I0129 16:01:05.056929 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3b2cbc69-268a-4c30-b9c0-d1352f380259" (UID: "3b2cbc69-268a-4c30-b9c0-d1352f380259"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:05 crc kubenswrapper[5008]: I0129 16:01:05.057284 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b2cbc69-268a-4c30-b9c0-d1352f380259-kube-api-access-q69lw" (OuterVolumeSpecName: "kube-api-access-q69lw") pod "3b2cbc69-268a-4c30-b9c0-d1352f380259" (UID: "3b2cbc69-268a-4c30-b9c0-d1352f380259"). InnerVolumeSpecName "kube-api-access-q69lw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:01:05 crc kubenswrapper[5008]: I0129 16:01:05.080456 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b2cbc69-268a-4c30-b9c0-d1352f380259" (UID: "3b2cbc69-268a-4c30-b9c0-d1352f380259"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:05 crc kubenswrapper[5008]: I0129 16:01:05.101044 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-config-data" (OuterVolumeSpecName: "config-data") pod "3b2cbc69-268a-4c30-b9c0-d1352f380259" (UID: "3b2cbc69-268a-4c30-b9c0-d1352f380259"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:05 crc kubenswrapper[5008]: I0129 16:01:05.152514 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q69lw\" (UniqueName: \"kubernetes.io/projected/3b2cbc69-268a-4c30-b9c0-d1352f380259-kube-api-access-q69lw\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:05 crc kubenswrapper[5008]: I0129 16:01:05.152561 5008 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:05 crc kubenswrapper[5008]: I0129 16:01:05.152576 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:05 crc kubenswrapper[5008]: I0129 16:01:05.152589 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b2cbc69-268a-4c30-b9c0-d1352f380259-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:05 crc kubenswrapper[5008]: I0129 16:01:05.571040 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29495041-5xjnv" event={"ID":"3b2cbc69-268a-4c30-b9c0-d1352f380259","Type":"ContainerDied","Data":"9ec098a8c5a25784ebfb9adf6dbd1984cb0c733cfe0df9791ed97b64e820c3d5"} Jan 29 16:01:05 crc kubenswrapper[5008]: I0129 16:01:05.571086 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ec098a8c5a25784ebfb9adf6dbd1984cb0c733cfe0df9791ed97b64e820c3d5" Jan 29 16:01:05 crc kubenswrapper[5008]: I0129 16:01:05.571157 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29495041-5xjnv" Jan 29 16:01:08 crc kubenswrapper[5008]: E0129 16:01:08.325855 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:01:09 crc kubenswrapper[5008]: I0129 16:01:09.324188 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 16:01:09 crc kubenswrapper[5008]: E0129 16:01:09.325246 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:01:16 crc kubenswrapper[5008]: I0129 16:01:16.914649 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5cmr5"] Jan 29 16:01:16 crc kubenswrapper[5008]: E0129 16:01:16.915540 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b2cbc69-268a-4c30-b9c0-d1352f380259" containerName="keystone-cron" Jan 29 16:01:16 crc kubenswrapper[5008]: I0129 16:01:16.915555 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b2cbc69-268a-4c30-b9c0-d1352f380259" containerName="keystone-cron" Jan 29 16:01:16 crc kubenswrapper[5008]: I0129 16:01:16.915745 5008 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3b2cbc69-268a-4c30-b9c0-d1352f380259" containerName="keystone-cron" Jan 29 16:01:16 crc kubenswrapper[5008]: I0129 16:01:16.917272 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:16 crc kubenswrapper[5008]: I0129 16:01:16.927043 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5cmr5"] Jan 29 16:01:16 crc kubenswrapper[5008]: I0129 16:01:16.999542 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2qcm\" (UniqueName: \"kubernetes.io/projected/9b131575-cb55-4ef5-908d-83b174d165d0-kube-api-access-l2qcm\") pod \"redhat-operators-5cmr5\" (UID: \"9b131575-cb55-4ef5-908d-83b174d165d0\") " pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:17 crc kubenswrapper[5008]: I0129 16:01:17.000032 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b131575-cb55-4ef5-908d-83b174d165d0-catalog-content\") pod \"redhat-operators-5cmr5\" (UID: \"9b131575-cb55-4ef5-908d-83b174d165d0\") " pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:17 crc kubenswrapper[5008]: I0129 16:01:17.000148 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b131575-cb55-4ef5-908d-83b174d165d0-utilities\") pod \"redhat-operators-5cmr5\" (UID: \"9b131575-cb55-4ef5-908d-83b174d165d0\") " pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:17 crc kubenswrapper[5008]: I0129 16:01:17.102946 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b131575-cb55-4ef5-908d-83b174d165d0-catalog-content\") pod \"redhat-operators-5cmr5\" (UID: \"9b131575-cb55-4ef5-908d-83b174d165d0\") " pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:17 crc kubenswrapper[5008]: I0129 16:01:17.103151 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b131575-cb55-4ef5-908d-83b174d165d0-utilities\") pod \"redhat-operators-5cmr5\" (UID: \"9b131575-cb55-4ef5-908d-83b174d165d0\") " pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:17 crc kubenswrapper[5008]: I0129 16:01:17.103199 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2qcm\" (UniqueName: \"kubernetes.io/projected/9b131575-cb55-4ef5-908d-83b174d165d0-kube-api-access-l2qcm\") pod \"redhat-operators-5cmr5\" (UID: \"9b131575-cb55-4ef5-908d-83b174d165d0\") " pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:17 crc kubenswrapper[5008]: I0129 16:01:17.103671 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b131575-cb55-4ef5-908d-83b174d165d0-catalog-content\") pod \"redhat-operators-5cmr5\" (UID: \"9b131575-cb55-4ef5-908d-83b174d165d0\") " pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:17 crc kubenswrapper[5008]: I0129 16:01:17.104583 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b131575-cb55-4ef5-908d-83b174d165d0-utilities\") pod \"redhat-operators-5cmr5\" (UID: \"9b131575-cb55-4ef5-908d-83b174d165d0\") " 
pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:17 crc kubenswrapper[5008]: I0129 16:01:17.127972 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2qcm\" (UniqueName: \"kubernetes.io/projected/9b131575-cb55-4ef5-908d-83b174d165d0-kube-api-access-l2qcm\") pod \"redhat-operators-5cmr5\" (UID: \"9b131575-cb55-4ef5-908d-83b174d165d0\") " pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:17 crc kubenswrapper[5008]: I0129 16:01:17.256065 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:17 crc kubenswrapper[5008]: I0129 16:01:17.725335 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5cmr5"] Jan 29 16:01:18 crc kubenswrapper[5008]: I0129 16:01:18.685491 5008 generic.go:334] "Generic (PLEG): container finished" podID="9b131575-cb55-4ef5-908d-83b174d165d0" containerID="6d8ad6dc54431cc9aa0bbcc9ef4eedc90e5721482eacd2703a603cd6f7db4dac" exitCode=0 Jan 29 16:01:18 crc kubenswrapper[5008]: I0129 16:01:18.685603 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cmr5" event={"ID":"9b131575-cb55-4ef5-908d-83b174d165d0","Type":"ContainerDied","Data":"6d8ad6dc54431cc9aa0bbcc9ef4eedc90e5721482eacd2703a603cd6f7db4dac"} Jan 29 16:01:18 crc kubenswrapper[5008]: I0129 16:01:18.685935 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cmr5" event={"ID":"9b131575-cb55-4ef5-908d-83b174d165d0","Type":"ContainerStarted","Data":"e18ece1d64640eef6799f2182daa611c9cd47488c0aef34b85d423cbc390275e"} Jan 29 16:01:18 crc kubenswrapper[5008]: I0129 16:01:18.687501 5008 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:01:20 crc kubenswrapper[5008]: I0129 16:01:20.324032 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 16:01:20 crc kubenswrapper[5008]: I0129 16:01:20.709894 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerStarted","Data":"0dec156c206cdfc740e5715a405a715fb9e2750f61e850f0cbfb19fecfd528cb"} Jan 29 16:01:20 crc kubenswrapper[5008]: I0129 16:01:20.712562 5008 generic.go:334] "Generic (PLEG): container finished" podID="9b131575-cb55-4ef5-908d-83b174d165d0" containerID="0dabaae3422c84e2a30af4f5754f0294df0588db5086c11a216a0c2cf70bd3c8" exitCode=0 Jan 29 16:01:20 crc kubenswrapper[5008]: I0129 16:01:20.712592 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cmr5" event={"ID":"9b131575-cb55-4ef5-908d-83b174d165d0","Type":"ContainerDied","Data":"0dabaae3422c84e2a30af4f5754f0294df0588db5086c11a216a0c2cf70bd3c8"} Jan 29 16:01:23 crc kubenswrapper[5008]: E0129 16:01:23.328550 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:01:25 crc kubenswrapper[5008]: I0129 16:01:25.757090 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cmr5" 
event={"ID":"9b131575-cb55-4ef5-908d-83b174d165d0","Type":"ContainerStarted","Data":"627311f0d2250852e5b1cf1d4db05f25cc50bba5481aa9e9e514e0c6c242df86"} Jan 29 16:01:25 crc kubenswrapper[5008]: I0129 16:01:25.802690 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5cmr5" podStartSLOduration=3.086401942 podStartE2EDuration="9.802667573s" podCreationTimestamp="2026-01-29 16:01:16 +0000 UTC" firstStartedPulling="2026-01-29 16:01:18.687085464 +0000 UTC m=+2022.359939731" lastFinishedPulling="2026-01-29 16:01:25.403351125 +0000 UTC m=+2029.076205362" observedRunningTime="2026-01-29 16:01:25.787012586 +0000 UTC m=+2029.459866823" watchObservedRunningTime="2026-01-29 16:01:25.802667573 +0000 UTC m=+2029.475521830" Jan 29 16:01:26 crc kubenswrapper[5008]: I0129 16:01:26.015899 5008 scope.go:117] "RemoveContainer" containerID="4e5d5fbe6f7326436f09c1eeb706af22dd1889f9d31180f26e9f3a4622f566e8" Jan 29 16:01:26 crc kubenswrapper[5008]: I0129 16:01:26.041139 5008 scope.go:117] "RemoveContainer" containerID="9c072e49faa0fcbf14fb26ba5be4f4038a4404627a5b1d14d06a8f9d4347e6b9" Jan 29 16:01:26 crc kubenswrapper[5008]: I0129 16:01:26.095093 5008 scope.go:117] "RemoveContainer" containerID="169df0c3000d56c3aa28fc235cca6494757bead3f467fc3b72cab38160ba66e9" Jan 29 16:01:26 crc kubenswrapper[5008]: I0129 16:01:26.132947 5008 scope.go:117] "RemoveContainer" containerID="84562c9f10ffe2b7193c90030faf995da403e3f35ef68c087bff6d088be04ae5" Jan 29 16:01:26 crc kubenswrapper[5008]: I0129 16:01:26.170963 5008 scope.go:117] "RemoveContainer" containerID="be81fff79545094faefca144ba3c4c81eebfa7419befdbb4509e7d36ea1420d2" Jan 29 16:01:26 crc kubenswrapper[5008]: I0129 16:01:26.231631 5008 scope.go:117] "RemoveContainer" containerID="415c274cf2a73d8ccd9cabf2d49c7d2a9afd104170d6b26b6bc768e4e9246896" Jan 29 16:01:27 crc kubenswrapper[5008]: I0129 16:01:27.256247 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:27 crc kubenswrapper[5008]: I0129 16:01:27.256582 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:28 crc kubenswrapper[5008]: I0129 16:01:28.305221 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5cmr5" podUID="9b131575-cb55-4ef5-908d-83b174d165d0" containerName="registry-server" probeResult="failure" output=< Jan 29 16:01:28 crc kubenswrapper[5008]: timeout: failed to connect service ":50051" within 1s Jan 29 16:01:28 crc kubenswrapper[5008]: > Jan 29 16:01:35 crc kubenswrapper[5008]: E0129 16:01:35.325883 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:01:37 crc kubenswrapper[5008]: I0129 16:01:37.337757 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:37 crc kubenswrapper[5008]: I0129 16:01:37.404114 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:37 crc kubenswrapper[5008]: I0129 16:01:37.578493 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5cmr5"] Jan 29 
16:01:38 crc kubenswrapper[5008]: I0129 16:01:38.869926 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5cmr5" podUID="9b131575-cb55-4ef5-908d-83b174d165d0" containerName="registry-server" containerID="cri-o://627311f0d2250852e5b1cf1d4db05f25cc50bba5481aa9e9e514e0c6c242df86" gracePeriod=2 Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.052826 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9mffk"] Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.061027 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9mffk"] Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.319537 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.334653 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00b42485-f42b-4ca6-8e84-1a795454dd9f" path="/var/lib/kubelet/pods/00b42485-f42b-4ca6-8e84-1a795454dd9f/volumes" Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.429187 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2qcm\" (UniqueName: \"kubernetes.io/projected/9b131575-cb55-4ef5-908d-83b174d165d0-kube-api-access-l2qcm\") pod \"9b131575-cb55-4ef5-908d-83b174d165d0\" (UID: \"9b131575-cb55-4ef5-908d-83b174d165d0\") " Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.429445 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b131575-cb55-4ef5-908d-83b174d165d0-utilities\") pod \"9b131575-cb55-4ef5-908d-83b174d165d0\" (UID: \"9b131575-cb55-4ef5-908d-83b174d165d0\") " Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.429699 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b131575-cb55-4ef5-908d-83b174d165d0-catalog-content\") pod \"9b131575-cb55-4ef5-908d-83b174d165d0\" (UID: \"9b131575-cb55-4ef5-908d-83b174d165d0\") " Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.431566 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b131575-cb55-4ef5-908d-83b174d165d0-utilities" (OuterVolumeSpecName: "utilities") pod "9b131575-cb55-4ef5-908d-83b174d165d0" (UID: "9b131575-cb55-4ef5-908d-83b174d165d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.439957 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b131575-cb55-4ef5-908d-83b174d165d0-kube-api-access-l2qcm" (OuterVolumeSpecName: "kube-api-access-l2qcm") pod "9b131575-cb55-4ef5-908d-83b174d165d0" (UID: "9b131575-cb55-4ef5-908d-83b174d165d0"). InnerVolumeSpecName "kube-api-access-l2qcm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.532063 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2qcm\" (UniqueName: \"kubernetes.io/projected/9b131575-cb55-4ef5-908d-83b174d165d0-kube-api-access-l2qcm\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.532093 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b131575-cb55-4ef5-908d-83b174d165d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.541816 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b131575-cb55-4ef5-908d-83b174d165d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b131575-cb55-4ef5-908d-83b174d165d0" (UID: "9b131575-cb55-4ef5-908d-83b174d165d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.634574 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b131575-cb55-4ef5-908d-83b174d165d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.879624 5008 generic.go:334] "Generic (PLEG): container finished" podID="9b131575-cb55-4ef5-908d-83b174d165d0" containerID="627311f0d2250852e5b1cf1d4db05f25cc50bba5481aa9e9e514e0c6c242df86" exitCode=0 Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.879663 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cmr5" event={"ID":"9b131575-cb55-4ef5-908d-83b174d165d0","Type":"ContainerDied","Data":"627311f0d2250852e5b1cf1d4db05f25cc50bba5481aa9e9e514e0c6c242df86"} Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.879691 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cmr5" event={"ID":"9b131575-cb55-4ef5-908d-83b174d165d0","Type":"ContainerDied","Data":"e18ece1d64640eef6799f2182daa611c9cd47488c0aef34b85d423cbc390275e"} Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.879706 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5cmr5" Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.879709 5008 scope.go:117] "RemoveContainer" containerID="627311f0d2250852e5b1cf1d4db05f25cc50bba5481aa9e9e514e0c6c242df86" Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.917362 5008 scope.go:117] "RemoveContainer" containerID="0dabaae3422c84e2a30af4f5754f0294df0588db5086c11a216a0c2cf70bd3c8" Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.918126 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5cmr5"] Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.924648 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5cmr5"] Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.935581 5008 scope.go:117] "RemoveContainer" containerID="6d8ad6dc54431cc9aa0bbcc9ef4eedc90e5721482eacd2703a603cd6f7db4dac" Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.998272 5008 scope.go:117] "RemoveContainer" containerID="627311f0d2250852e5b1cf1d4db05f25cc50bba5481aa9e9e514e0c6c242df86" Jan 29 16:01:39 crc kubenswrapper[5008]: E0129 16:01:39.998977 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"627311f0d2250852e5b1cf1d4db05f25cc50bba5481aa9e9e514e0c6c242df86\": container with ID starting with 627311f0d2250852e5b1cf1d4db05f25cc50bba5481aa9e9e514e0c6c242df86 not found: ID does not exist" containerID="627311f0d2250852e5b1cf1d4db05f25cc50bba5481aa9e9e514e0c6c242df86" Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.999008 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"627311f0d2250852e5b1cf1d4db05f25cc50bba5481aa9e9e514e0c6c242df86"} err="failed to get container status \"627311f0d2250852e5b1cf1d4db05f25cc50bba5481aa9e9e514e0c6c242df86\": rpc error: code = NotFound desc = could not find container \"627311f0d2250852e5b1cf1d4db05f25cc50bba5481aa9e9e514e0c6c242df86\": container with ID starting with 627311f0d2250852e5b1cf1d4db05f25cc50bba5481aa9e9e514e0c6c242df86 not found: ID does not exist" Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.999029 5008 scope.go:117] "RemoveContainer" containerID="0dabaae3422c84e2a30af4f5754f0294df0588db5086c11a216a0c2cf70bd3c8" Jan 29 16:01:39 crc kubenswrapper[5008]: E0129 16:01:39.999698 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dabaae3422c84e2a30af4f5754f0294df0588db5086c11a216a0c2cf70bd3c8\": container with ID starting with 0dabaae3422c84e2a30af4f5754f0294df0588db5086c11a216a0c2cf70bd3c8 not found: ID does not exist" containerID="0dabaae3422c84e2a30af4f5754f0294df0588db5086c11a216a0c2cf70bd3c8" Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.999832 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dabaae3422c84e2a30af4f5754f0294df0588db5086c11a216a0c2cf70bd3c8"} err="failed to get container status \"0dabaae3422c84e2a30af4f5754f0294df0588db5086c11a216a0c2cf70bd3c8\": rpc error: code = NotFound desc = could not find container \"0dabaae3422c84e2a30af4f5754f0294df0588db5086c11a216a0c2cf70bd3c8\": container with ID starting with 0dabaae3422c84e2a30af4f5754f0294df0588db5086c11a216a0c2cf70bd3c8 not found: ID does not exist" Jan 29 16:01:39 crc kubenswrapper[5008]: I0129 16:01:39.999922 5008 scope.go:117] "RemoveContainer" 
containerID="6d8ad6dc54431cc9aa0bbcc9ef4eedc90e5721482eacd2703a603cd6f7db4dac" Jan 29 16:01:40 crc kubenswrapper[5008]: E0129 16:01:40.000296 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d8ad6dc54431cc9aa0bbcc9ef4eedc90e5721482eacd2703a603cd6f7db4dac\": container with ID starting with 6d8ad6dc54431cc9aa0bbcc9ef4eedc90e5721482eacd2703a603cd6f7db4dac not found: ID does not exist" containerID="6d8ad6dc54431cc9aa0bbcc9ef4eedc90e5721482eacd2703a603cd6f7db4dac" Jan 29 16:01:40 crc kubenswrapper[5008]: I0129 16:01:40.000322 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d8ad6dc54431cc9aa0bbcc9ef4eedc90e5721482eacd2703a603cd6f7db4dac"} err="failed to get container status \"6d8ad6dc54431cc9aa0bbcc9ef4eedc90e5721482eacd2703a603cd6f7db4dac\": rpc error: code = NotFound desc = could not find container \"6d8ad6dc54431cc9aa0bbcc9ef4eedc90e5721482eacd2703a603cd6f7db4dac\": container with ID starting with 6d8ad6dc54431cc9aa0bbcc9ef4eedc90e5721482eacd2703a603cd6f7db4dac not found: ID does not exist" Jan 29 16:01:41 crc kubenswrapper[5008]: I0129 16:01:41.336767 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b131575-cb55-4ef5-908d-83b174d165d0" path="/var/lib/kubelet/pods/9b131575-cb55-4ef5-908d-83b174d165d0/volumes" Jan 29 16:01:50 crc kubenswrapper[5008]: E0129 16:01:50.326898 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:01:57 crc kubenswrapper[5008]: I0129 16:01:57.038942 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-2crqc"] Jan 29 16:01:57 crc kubenswrapper[5008]: I0129 16:01:57.048143 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-2crqc"] Jan 29 16:01:57 crc kubenswrapper[5008]: I0129 16:01:57.336917 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eef9ab07-3037-4115-bb8e-954191b169af" path="/var/lib/kubelet/pods/eef9ab07-3037-4115-bb8e-954191b169af/volumes" Jan 29 16:02:04 crc kubenswrapper[5008]: E0129 16:02:04.327547 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:02:14 crc kubenswrapper[5008]: I0129 16:02:14.027974 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-k5vpb"] Jan 29 16:02:14 crc kubenswrapper[5008]: I0129 16:02:14.035721 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-k5vpb"] Jan 29 16:02:15 crc kubenswrapper[5008]: I0129 16:02:15.334609 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0d0cf25-1253-4f34-91a0-c4381d2e8a3f" path="/var/lib/kubelet/pods/a0d0cf25-1253-4f34-91a0-c4381d2e8a3f/volumes" Jan 29 16:02:19 crc kubenswrapper[5008]: E0129 16:02:19.328007 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" 
podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:02:26 crc kubenswrapper[5008]: I0129 16:02:26.358719 5008 scope.go:117] "RemoveContainer" containerID="89a0838edd76e8e3384f319feeb4aa997d5c03e52a3680d202106547bff689f7" Jan 29 16:02:26 crc kubenswrapper[5008]: I0129 16:02:26.424044 5008 scope.go:117] "RemoveContainer" containerID="36c4369212a2c18b6f334f104822d0182e207e44849984ff3689c410393720c8" Jan 29 16:02:26 crc kubenswrapper[5008]: I0129 16:02:26.480933 5008 scope.go:117] "RemoveContainer" containerID="cae76da1b19104ec9ac0d79d4c0c18c044c82a9e0fb4665e780db9f6a9a1f05e" Jan 29 16:02:34 crc kubenswrapper[5008]: E0129 16:02:34.327405 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:02:45 crc kubenswrapper[5008]: I0129 16:02:45.061468 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-k4msd"] Jan 29 16:02:45 crc kubenswrapper[5008]: I0129 16:02:45.072561 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-k4msd"] Jan 29 16:02:45 crc kubenswrapper[5008]: I0129 16:02:45.338743 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfacde84-7d28-464b-8854-622fd127956c" path="/var/lib/kubelet/pods/dfacde84-7d28-464b-8854-622fd127956c/volumes" Jan 29 16:02:46 crc kubenswrapper[5008]: E0129 16:02:46.326694 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:02:59 crc kubenswrapper[5008]: E0129 16:02:59.325554 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:03:10 crc kubenswrapper[5008]: E0129 16:03:10.328015 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:03:23 crc kubenswrapper[5008]: E0129 16:03:23.453877 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 29 16:03:23 crc kubenswrapper[5008]: E0129 16:03:23.454591 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zk8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d40740f9-e8d8-4f46-b8b0-d913a6c33210): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:03:23 crc kubenswrapper[5008]: E0129 16:03:23.455979 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:03:26 crc kubenswrapper[5008]: I0129 16:03:26.624549 5008 scope.go:117] "RemoveContainer" containerID="b2349ea6eb40feb88475ff1a1d63808b9c3d0aa5c899aef5d037351e78d59f1c" Jan 29 16:03:26 crc kubenswrapper[5008]: I0129 16:03:26.652601 5008 scope.go:117] "RemoveContainer" 
containerID="029122710bec3ead5773dc17d19527fcf835c2079cb3b4366dd751781af68880" Jan 29 16:03:26 crc kubenswrapper[5008]: I0129 16:03:26.678298 5008 scope.go:117] "RemoveContainer" containerID="fd5b906760d69a40cedcc9755fc25288bec9129c3fde13b9ce243cf6e009d4c4" Jan 29 16:03:26 crc kubenswrapper[5008]: I0129 16:03:26.722692 5008 scope.go:117] "RemoveContainer" containerID="105b9a43249e6967af25433d63396c59e60e556a090d580d57d9d70ee4546248" Jan 29 16:03:26 crc kubenswrapper[5008]: I0129 16:03:26.743368 5008 scope.go:117] "RemoveContainer" containerID="560c4a087d72c5b97173f2148e008364217cf3873e93b9ddf90930a6cb837f82" Jan 29 16:03:26 crc kubenswrapper[5008]: I0129 16:03:26.791417 5008 scope.go:117] "RemoveContainer" containerID="0bd2718859e8227e4d8612c327ecd5f34368bcc87d5e43cf15084febf3a519cd" Jan 29 16:03:26 crc kubenswrapper[5008]: I0129 16:03:26.851386 5008 scope.go:117] "RemoveContainer" containerID="e9bb0bb4b88e5113680d7a705c1a4e73f76938c8a06828dd6b4734e57b5342fa" Jan 29 16:03:26 crc kubenswrapper[5008]: I0129 16:03:26.872984 5008 scope.go:117] "RemoveContainer" containerID="c2bc36fbe8f3e25d7d68a9f461e1ef0730dfb9b9c4a4ac61922941d595122f44" Jan 29 16:03:26 crc kubenswrapper[5008]: I0129 16:03:26.920340 5008 scope.go:117] "RemoveContainer" containerID="7d4815761a9d2f556ee06bbf98cf1b6c8cec425b4632da102c9fe10b76949770" Jan 29 16:03:26 crc kubenswrapper[5008]: I0129 16:03:26.960154 5008 scope.go:117] "RemoveContainer" containerID="4c7f1c035bf93e990a09127ab0239b9dd8fb171aad0406e2e4f471771073ce20" Jan 29 16:03:38 crc kubenswrapper[5008]: E0129 16:03:38.325887 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:03:43 crc kubenswrapper[5008]: I0129 16:03:43.991040 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:03:43 crc kubenswrapper[5008]: I0129 16:03:43.991620 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:03:52 crc kubenswrapper[5008]: E0129 16:03:52.326158 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:04:06 crc kubenswrapper[5008]: E0129 16:04:06.327192 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:04:13 crc kubenswrapper[5008]: I0129 16:04:13.990399 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:04:13 crc kubenswrapper[5008]: I0129 16:04:13.990869 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:04:17 crc kubenswrapper[5008]: E0129 16:04:17.330910 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:04:31 crc kubenswrapper[5008]: E0129 16:04:31.326273 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:04:43 crc kubenswrapper[5008]: E0129 16:04:43.326604 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:04:43 crc kubenswrapper[5008]: I0129 16:04:43.990865 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:04:43 crc kubenswrapper[5008]: I0129 16:04:43.991183 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:04:43 crc kubenswrapper[5008]: I0129 16:04:43.991232 5008 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 16:04:43 crc kubenswrapper[5008]: I0129 16:04:43.991928 5008 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0dec156c206cdfc740e5715a405a715fb9e2750f61e850f0cbfb19fecfd528cb"} pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:04:43 crc kubenswrapper[5008]: I0129 16:04:43.991986 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" containerID="cri-o://0dec156c206cdfc740e5715a405a715fb9e2750f61e850f0cbfb19fecfd528cb" gracePeriod=600 Jan 29 16:04:44 crc kubenswrapper[5008]: I0129 16:04:44.875674 5008 generic.go:334] "Generic (PLEG): container finished" podID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" 
containerID="0dec156c206cdfc740e5715a405a715fb9e2750f61e850f0cbfb19fecfd528cb" exitCode=0 Jan 29 16:04:44 crc kubenswrapper[5008]: I0129 16:04:44.876351 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerDied","Data":"0dec156c206cdfc740e5715a405a715fb9e2750f61e850f0cbfb19fecfd528cb"} Jan 29 16:04:44 crc kubenswrapper[5008]: I0129 16:04:44.876443 5008 scope.go:117] "RemoveContainer" containerID="1c8349b7c34277b7122a478ebda273749cae45969c3cfbb565f71a131de59c19" Jan 29 16:04:45 crc kubenswrapper[5008]: I0129 16:04:45.885056 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerStarted","Data":"cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64"} Jan 29 16:04:55 crc kubenswrapper[5008]: E0129 16:04:55.326729 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:05:06 crc kubenswrapper[5008]: E0129 16:05:06.326421 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:05:21 crc kubenswrapper[5008]: E0129 16:05:21.327321 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:05:33 crc kubenswrapper[5008]: E0129 16:05:33.329392 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:05:47 crc kubenswrapper[5008]: E0129 16:05:47.331827 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:06:02 crc kubenswrapper[5008]: E0129 16:06:02.326991 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:06:15 crc kubenswrapper[5008]: E0129 16:06:15.328361 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:06:29 crc kubenswrapper[5008]: E0129 16:06:29.326149 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:06:44 crc kubenswrapper[5008]: E0129 16:06:44.326993 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:06:55 crc kubenswrapper[5008]: E0129 16:06:55.328410 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:07:06 crc kubenswrapper[5008]: E0129 16:07:06.327337 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:07:13 crc kubenswrapper[5008]: I0129 16:07:13.990763 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:07:13 crc kubenswrapper[5008]: I0129 16:07:13.991225 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:07:21 crc kubenswrapper[5008]: E0129 16:07:21.326681 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:07:22 crc kubenswrapper[5008]: I0129 16:07:22.722006 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6qmv7"] Jan 29 16:07:22 crc kubenswrapper[5008]: E0129 16:07:22.722839 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b131575-cb55-4ef5-908d-83b174d165d0" containerName="extract-content" Jan 29 16:07:22 crc kubenswrapper[5008]: I0129 16:07:22.722856 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b131575-cb55-4ef5-908d-83b174d165d0" containerName="extract-content" Jan 29 16:07:22 crc kubenswrapper[5008]: E0129 16:07:22.722876 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b131575-cb55-4ef5-908d-83b174d165d0" containerName="extract-utilities" Jan 29 16:07:22 crc kubenswrapper[5008]: I0129 16:07:22.722885 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b131575-cb55-4ef5-908d-83b174d165d0" containerName="extract-utilities" Jan 29 16:07:22 crc kubenswrapper[5008]: E0129 16:07:22.722901 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b131575-cb55-4ef5-908d-83b174d165d0" containerName="registry-server" Jan 29 16:07:22 crc kubenswrapper[5008]: I0129 16:07:22.722908 5008 
state_mem.go:107] "Deleted CPUSet assignment" podUID="9b131575-cb55-4ef5-908d-83b174d165d0" containerName="registry-server" Jan 29 16:07:22 crc kubenswrapper[5008]: I0129 16:07:22.723135 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b131575-cb55-4ef5-908d-83b174d165d0" containerName="registry-server" Jan 29 16:07:22 crc kubenswrapper[5008]: I0129 16:07:22.724841 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6qmv7" Jan 29 16:07:22 crc kubenswrapper[5008]: I0129 16:07:22.729924 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6qmv7"] Jan 29 16:07:22 crc kubenswrapper[5008]: I0129 16:07:22.868882 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed48245-be09-46c8-97f9-263179717512-utilities\") pod \"certified-operators-6qmv7\" (UID: \"2ed48245-be09-46c8-97f9-263179717512\") " pod="openshift-marketplace/certified-operators-6qmv7" Jan 29 16:07:22 crc kubenswrapper[5008]: I0129 16:07:22.869001 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed48245-be09-46c8-97f9-263179717512-catalog-content\") pod \"certified-operators-6qmv7\" (UID: \"2ed48245-be09-46c8-97f9-263179717512\") " pod="openshift-marketplace/certified-operators-6qmv7" Jan 29 16:07:22 crc kubenswrapper[5008]: I0129 16:07:22.869137 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29db7\" (UniqueName: \"kubernetes.io/projected/2ed48245-be09-46c8-97f9-263179717512-kube-api-access-29db7\") pod \"certified-operators-6qmv7\" (UID: \"2ed48245-be09-46c8-97f9-263179717512\") " pod="openshift-marketplace/certified-operators-6qmv7" Jan 29 16:07:22 crc kubenswrapper[5008]: I0129 16:07:22.971192 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29db7\" (UniqueName: \"kubernetes.io/projected/2ed48245-be09-46c8-97f9-263179717512-kube-api-access-29db7\") pod \"certified-operators-6qmv7\" (UID: \"2ed48245-be09-46c8-97f9-263179717512\") " pod="openshift-marketplace/certified-operators-6qmv7" Jan 29 16:07:22 crc kubenswrapper[5008]: I0129 16:07:22.971476 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed48245-be09-46c8-97f9-263179717512-utilities\") pod \"certified-operators-6qmv7\" (UID: \"2ed48245-be09-46c8-97f9-263179717512\") " pod="openshift-marketplace/certified-operators-6qmv7" Jan 29 16:07:22 crc kubenswrapper[5008]: I0129 16:07:22.971617 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed48245-be09-46c8-97f9-263179717512-catalog-content\") pod \"certified-operators-6qmv7\" (UID: \"2ed48245-be09-46c8-97f9-263179717512\") " pod="openshift-marketplace/certified-operators-6qmv7" Jan 29 16:07:22 crc kubenswrapper[5008]: I0129 16:07:22.972139 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed48245-be09-46c8-97f9-263179717512-utilities\") pod \"certified-operators-6qmv7\" (UID: \"2ed48245-be09-46c8-97f9-263179717512\") " pod="openshift-marketplace/certified-operators-6qmv7" Jan 29 16:07:22 crc kubenswrapper[5008]: 
I0129 16:07:22.972217 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed48245-be09-46c8-97f9-263179717512-catalog-content\") pod \"certified-operators-6qmv7\" (UID: \"2ed48245-be09-46c8-97f9-263179717512\") " pod="openshift-marketplace/certified-operators-6qmv7" Jan 29 16:07:23 crc kubenswrapper[5008]: I0129 16:07:22.996327 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29db7\" (UniqueName: \"kubernetes.io/projected/2ed48245-be09-46c8-97f9-263179717512-kube-api-access-29db7\") pod \"certified-operators-6qmv7\" (UID: \"2ed48245-be09-46c8-97f9-263179717512\") " pod="openshift-marketplace/certified-operators-6qmv7" Jan 29 16:07:23 crc kubenswrapper[5008]: I0129 16:07:23.050110 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6qmv7" Jan 29 16:07:23 crc kubenswrapper[5008]: I0129 16:07:23.555759 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6qmv7"] Jan 29 16:07:23 crc kubenswrapper[5008]: W0129 16:07:23.567028 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ed48245_be09_46c8_97f9_263179717512.slice/crio-4e824484315a6e30506a2f7c7fb618d142a68d99bd3176c0a282d8bafa44de26 WatchSource:0}: Error finding container 4e824484315a6e30506a2f7c7fb618d142a68d99bd3176c0a282d8bafa44de26: Status 404 returned error can't find the container with id 4e824484315a6e30506a2f7c7fb618d142a68d99bd3176c0a282d8bafa44de26 Jan 29 16:07:24 crc kubenswrapper[5008]: I0129 16:07:24.253415 5008 generic.go:334] "Generic (PLEG): container finished" podID="2ed48245-be09-46c8-97f9-263179717512" containerID="150149c6a5ab91f06872737ef57f87254f939be1476ab033203541676c958766" exitCode=0 Jan 29 16:07:24 crc kubenswrapper[5008]: I0129 16:07:24.253499 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qmv7" event={"ID":"2ed48245-be09-46c8-97f9-263179717512","Type":"ContainerDied","Data":"150149c6a5ab91f06872737ef57f87254f939be1476ab033203541676c958766"} Jan 29 16:07:24 crc kubenswrapper[5008]: I0129 16:07:24.253677 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qmv7" event={"ID":"2ed48245-be09-46c8-97f9-263179717512","Type":"ContainerStarted","Data":"4e824484315a6e30506a2f7c7fb618d142a68d99bd3176c0a282d8bafa44de26"} Jan 29 16:07:24 crc kubenswrapper[5008]: I0129 16:07:24.255468 5008 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:07:24 crc kubenswrapper[5008]: E0129 16:07:24.388424 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:07:24 crc kubenswrapper[5008]: E0129 16:07:24.388712 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29db7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6qmv7_openshift-marketplace(2ed48245-be09-46c8-97f9-263179717512): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:07:24 crc kubenswrapper[5008]: E0129 16:07:24.389967 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:07:25 crc kubenswrapper[5008]: E0129 16:07:25.262840 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:07:34 crc kubenswrapper[5008]: E0129 16:07:34.327595 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:07:38 crc kubenswrapper[5008]: E0129 16:07:38.502695 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:07:38 crc kubenswrapper[5008]: E0129 16:07:38.503210 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29db7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6qmv7_openshift-marketplace(2ed48245-be09-46c8-97f9-263179717512): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:07:38 crc kubenswrapper[5008]: E0129 16:07:38.504373 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:07:43 crc kubenswrapper[5008]: I0129 16:07:43.991270 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:07:43 crc kubenswrapper[5008]: I0129 16:07:43.991904 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:07:45 crc kubenswrapper[5008]: E0129 16:07:45.326163 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:07:53 crc kubenswrapper[5008]: E0129 16:07:53.327582 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" 
podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:07:55 crc kubenswrapper[5008]: I0129 16:07:55.277407 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9lmvr"] Jan 29 16:07:55 crc kubenswrapper[5008]: I0129 16:07:55.280535 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9lmvr" Jan 29 16:07:55 crc kubenswrapper[5008]: I0129 16:07:55.294641 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9lmvr"] Jan 29 16:07:55 crc kubenswrapper[5008]: I0129 16:07:55.330063 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf4cf5b-529f-49a9-900c-a94b840568d8-catalog-content\") pod \"community-operators-9lmvr\" (UID: \"0cf4cf5b-529f-49a9-900c-a94b840568d8\") " pod="openshift-marketplace/community-operators-9lmvr" Jan 29 16:07:55 crc kubenswrapper[5008]: I0129 16:07:55.330128 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf4cf5b-529f-49a9-900c-a94b840568d8-utilities\") pod \"community-operators-9lmvr\" (UID: \"0cf4cf5b-529f-49a9-900c-a94b840568d8\") " pod="openshift-marketplace/community-operators-9lmvr" Jan 29 16:07:55 crc kubenswrapper[5008]: I0129 16:07:55.330149 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcgl4\" (UniqueName: \"kubernetes.io/projected/0cf4cf5b-529f-49a9-900c-a94b840568d8-kube-api-access-gcgl4\") pod \"community-operators-9lmvr\" (UID: \"0cf4cf5b-529f-49a9-900c-a94b840568d8\") " pod="openshift-marketplace/community-operators-9lmvr" Jan 29 16:07:55 crc kubenswrapper[5008]: I0129 16:07:55.431841 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf4cf5b-529f-49a9-900c-a94b840568d8-catalog-content\") pod \"community-operators-9lmvr\" (UID: \"0cf4cf5b-529f-49a9-900c-a94b840568d8\") " pod="openshift-marketplace/community-operators-9lmvr" Jan 29 16:07:55 crc kubenswrapper[5008]: I0129 16:07:55.431921 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf4cf5b-529f-49a9-900c-a94b840568d8-utilities\") pod \"community-operators-9lmvr\" (UID: \"0cf4cf5b-529f-49a9-900c-a94b840568d8\") " pod="openshift-marketplace/community-operators-9lmvr" Jan 29 16:07:55 crc kubenswrapper[5008]: I0129 16:07:55.431953 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcgl4\" (UniqueName: \"kubernetes.io/projected/0cf4cf5b-529f-49a9-900c-a94b840568d8-kube-api-access-gcgl4\") pod \"community-operators-9lmvr\" (UID: \"0cf4cf5b-529f-49a9-900c-a94b840568d8\") " pod="openshift-marketplace/community-operators-9lmvr" Jan 29 16:07:55 crc kubenswrapper[5008]: I0129 16:07:55.432382 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf4cf5b-529f-49a9-900c-a94b840568d8-catalog-content\") pod \"community-operators-9lmvr\" (UID: \"0cf4cf5b-529f-49a9-900c-a94b840568d8\") " pod="openshift-marketplace/community-operators-9lmvr" Jan 29 16:07:55 crc kubenswrapper[5008]: I0129 16:07:55.432404 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf4cf5b-529f-49a9-900c-a94b840568d8-utilities\") pod \"community-operators-9lmvr\" (UID: \"0cf4cf5b-529f-49a9-900c-a94b840568d8\") " pod="openshift-marketplace/community-operators-9lmvr" Jan 29 16:07:55 crc kubenswrapper[5008]: I0129 16:07:55.452058 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcgl4\" (UniqueName: \"kubernetes.io/projected/0cf4cf5b-529f-49a9-900c-a94b840568d8-kube-api-access-gcgl4\") pod \"community-operators-9lmvr\" (UID: \"0cf4cf5b-529f-49a9-900c-a94b840568d8\") " pod="openshift-marketplace/community-operators-9lmvr" Jan 29 16:07:55 crc kubenswrapper[5008]: I0129 16:07:55.615499 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9lmvr" Jan 29 16:07:56 crc kubenswrapper[5008]: I0129 16:07:56.110768 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9lmvr"] Jan 29 16:07:56 crc kubenswrapper[5008]: I0129 16:07:56.534940 5008 generic.go:334] "Generic (PLEG): container finished" podID="0cf4cf5b-529f-49a9-900c-a94b840568d8" containerID="4afa3ecd1bba399d9d57363e776a21e44e34c2657ea6828efcf74ebcf9e4f108" exitCode=0 Jan 29 16:07:56 crc kubenswrapper[5008]: I0129 16:07:56.535016 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lmvr" event={"ID":"0cf4cf5b-529f-49a9-900c-a94b840568d8","Type":"ContainerDied","Data":"4afa3ecd1bba399d9d57363e776a21e44e34c2657ea6828efcf74ebcf9e4f108"} Jan 29 16:07:56 crc kubenswrapper[5008]: I0129 16:07:56.535077 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lmvr" event={"ID":"0cf4cf5b-529f-49a9-900c-a94b840568d8","Type":"ContainerStarted","Data":"3027721e802c941c68316a40edc4f5165c2ccf1c65e058c580444ac3144242da"} Jan 29 16:07:56 crc kubenswrapper[5008]: E0129 16:07:56.687651 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:07:56 crc kubenswrapper[5008]: E0129 16:07:56.687824 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gcgl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9lmvr_openshift-marketplace(0cf4cf5b-529f-49a9-900c-a94b840568d8): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:07:56 crc kubenswrapper[5008]: E0129 16:07:56.689058 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:07:57 crc kubenswrapper[5008]: E0129 16:07:57.546218 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:08:00 crc kubenswrapper[5008]: E0129 16:08:00.324976 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:08:08 crc kubenswrapper[5008]: E0129 16:08:08.473036 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:08:08 crc kubenswrapper[5008]: E0129 16:08:08.473640 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29db7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6qmv7_openshift-marketplace(2ed48245-be09-46c8-97f9-263179717512): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:08:08 crc kubenswrapper[5008]: E0129 16:08:08.474845 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:08:11 crc kubenswrapper[5008]: E0129 16:08:11.522368 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:08:11 crc kubenswrapper[5008]: E0129 16:08:11.522768 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gcgl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9lmvr_openshift-marketplace(0cf4cf5b-529f-49a9-900c-a94b840568d8): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:08:11 crc kubenswrapper[5008]: E0129 16:08:11.524029 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:08:12 crc kubenswrapper[5008]: E0129 16:08:12.325917 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:08:13 crc kubenswrapper[5008]: I0129 16:08:13.991071 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:08:13 crc kubenswrapper[5008]: I0129 16:08:13.991359 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:08:13 crc kubenswrapper[5008]: I0129 16:08:13.991400 5008 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 16:08:13 crc kubenswrapper[5008]: I0129 16:08:13.992105 5008 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64"} pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:08:13 crc kubenswrapper[5008]: I0129 16:08:13.992156 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" containerID="cri-o://cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" gracePeriod=600 Jan 29 16:08:14 crc kubenswrapper[5008]: E0129 16:08:14.160299 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:08:14 crc kubenswrapper[5008]: I0129 16:08:14.695265 5008 generic.go:334] "Generic (PLEG): container finished" podID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" exitCode=0 Jan 29 16:08:14 crc kubenswrapper[5008]: I0129 16:08:14.695314 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerDied","Data":"cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64"} Jan 29 16:08:14 crc kubenswrapper[5008]: I0129 16:08:14.695381 5008 scope.go:117] "RemoveContainer" containerID="0dec156c206cdfc740e5715a405a715fb9e2750f61e850f0cbfb19fecfd528cb" Jan 29 16:08:14 crc kubenswrapper[5008]: I0129 16:08:14.696037 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:08:14 crc kubenswrapper[5008]: E0129 16:08:14.696413 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:08:15 crc kubenswrapper[5008]: I0129 16:08:15.571953 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fl9wc"] Jan 29 16:08:15 crc kubenswrapper[5008]: I0129 16:08:15.574296 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:08:15 crc kubenswrapper[5008]: I0129 16:08:15.590272 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fl9wc"] Jan 29 16:08:15 crc kubenswrapper[5008]: I0129 16:08:15.629430 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66b503d3-cf12-4a89-90ca-27d7f941ed63-utilities\") pod \"redhat-marketplace-fl9wc\" (UID: \"66b503d3-cf12-4a89-90ca-27d7f941ed63\") " pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:08:15 crc kubenswrapper[5008]: I0129 16:08:15.629583 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66b503d3-cf12-4a89-90ca-27d7f941ed63-catalog-content\") pod \"redhat-marketplace-fl9wc\" (UID: \"66b503d3-cf12-4a89-90ca-27d7f941ed63\") " pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:08:15 crc kubenswrapper[5008]: I0129 16:08:15.629653 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8j5q\" (UniqueName: \"kubernetes.io/projected/66b503d3-cf12-4a89-90ca-27d7f941ed63-kube-api-access-l8j5q\") pod \"redhat-marketplace-fl9wc\" (UID: \"66b503d3-cf12-4a89-90ca-27d7f941ed63\") " pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:08:15 crc kubenswrapper[5008]: I0129 16:08:15.731929 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66b503d3-cf12-4a89-90ca-27d7f941ed63-catalog-content\") pod \"redhat-marketplace-fl9wc\" (UID: \"66b503d3-cf12-4a89-90ca-27d7f941ed63\") " pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:08:15 crc kubenswrapper[5008]: I0129 16:08:15.732012 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8j5q\" (UniqueName: \"kubernetes.io/projected/66b503d3-cf12-4a89-90ca-27d7f941ed63-kube-api-access-l8j5q\") pod \"redhat-marketplace-fl9wc\" (UID: \"66b503d3-cf12-4a89-90ca-27d7f941ed63\") " pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:08:15 crc kubenswrapper[5008]: I0129 16:08:15.732153 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66b503d3-cf12-4a89-90ca-27d7f941ed63-utilities\") pod \"redhat-marketplace-fl9wc\" (UID: \"66b503d3-cf12-4a89-90ca-27d7f941ed63\") " pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:08:15 crc kubenswrapper[5008]: I0129 16:08:15.732748 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66b503d3-cf12-4a89-90ca-27d7f941ed63-catalog-content\") pod \"redhat-marketplace-fl9wc\" (UID: \"66b503d3-cf12-4a89-90ca-27d7f941ed63\") " pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:08:15 crc kubenswrapper[5008]: I0129 16:08:15.732870 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66b503d3-cf12-4a89-90ca-27d7f941ed63-utilities\") pod \"redhat-marketplace-fl9wc\" (UID: \"66b503d3-cf12-4a89-90ca-27d7f941ed63\") " pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:08:15 crc kubenswrapper[5008]: I0129 16:08:15.754663 5008 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-l8j5q\" (UniqueName: \"kubernetes.io/projected/66b503d3-cf12-4a89-90ca-27d7f941ed63-kube-api-access-l8j5q\") pod \"redhat-marketplace-fl9wc\" (UID: \"66b503d3-cf12-4a89-90ca-27d7f941ed63\") " pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:08:15 crc kubenswrapper[5008]: I0129 16:08:15.914098 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:08:16 crc kubenswrapper[5008]: I0129 16:08:16.365118 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fl9wc"] Jan 29 16:08:16 crc kubenswrapper[5008]: I0129 16:08:16.714838 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fl9wc" event={"ID":"66b503d3-cf12-4a89-90ca-27d7f941ed63","Type":"ContainerStarted","Data":"048187ee97fe863a1a9a27bcd1b80c7e899bb088322f45c86b7fe479870681b4"} Jan 29 16:08:16 crc kubenswrapper[5008]: I0129 16:08:16.714900 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fl9wc" event={"ID":"66b503d3-cf12-4a89-90ca-27d7f941ed63","Type":"ContainerStarted","Data":"5b1b00bb2ae97cde561959176674c8591e6b4a491353c5009f561f79b72ee787"} Jan 29 16:08:17 crc kubenswrapper[5008]: I0129 16:08:17.724622 5008 generic.go:334] "Generic (PLEG): container finished" podID="66b503d3-cf12-4a89-90ca-27d7f941ed63" containerID="048187ee97fe863a1a9a27bcd1b80c7e899bb088322f45c86b7fe479870681b4" exitCode=0 Jan 29 16:08:17 crc kubenswrapper[5008]: I0129 16:08:17.724662 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fl9wc" event={"ID":"66b503d3-cf12-4a89-90ca-27d7f941ed63","Type":"ContainerDied","Data":"048187ee97fe863a1a9a27bcd1b80c7e899bb088322f45c86b7fe479870681b4"} Jan 29 16:08:17 crc kubenswrapper[5008]: E0129 16:08:17.855145 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:08:17 crc kubenswrapper[5008]: E0129 16:08:17.855642 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l8j5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fl9wc_openshift-marketplace(66b503d3-cf12-4a89-90ca-27d7f941ed63): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:08:17 crc kubenswrapper[5008]: E0129 16:08:17.857287 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:08:18 crc kubenswrapper[5008]: E0129 16:08:18.734333 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:08:21 crc kubenswrapper[5008]: E0129 16:08:21.326673 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:08:24 crc kubenswrapper[5008]: E0129 16:08:24.455012 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 29 16:08:24 crc kubenswrapper[5008]: E0129 16:08:24.455888 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zk8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d40740f9-e8d8-4f46-b8b0-d913a6c33210): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:08:24 crc kubenswrapper[5008]: E0129 16:08:24.457640 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:08:25 crc kubenswrapper[5008]: E0129 16:08:25.326535 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:08:26 crc kubenswrapper[5008]: I0129 16:08:26.324146 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:08:26 crc kubenswrapper[5008]: E0129 16:08:26.324387 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:08:29 crc kubenswrapper[5008]: E0129 16:08:29.454236 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:08:29 crc kubenswrapper[5008]: E0129 16:08:29.454694 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l8j5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fl9wc_openshift-marketplace(66b503d3-cf12-4a89-90ca-27d7f941ed63): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:08:29 crc kubenswrapper[5008]: E0129 16:08:29.455873 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" 
podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:08:33 crc kubenswrapper[5008]: E0129 16:08:33.328500 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:08:36 crc kubenswrapper[5008]: E0129 16:08:36.328904 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:08:37 crc kubenswrapper[5008]: I0129 16:08:37.348604 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:08:37 crc kubenswrapper[5008]: E0129 16:08:37.357616 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:08:37 crc kubenswrapper[5008]: E0129 16:08:37.493033 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:08:37 crc kubenswrapper[5008]: E0129 16:08:37.493181 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gcgl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-9lmvr_openshift-marketplace(0cf4cf5b-529f-49a9-900c-a94b840568d8): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:08:37 crc kubenswrapper[5008]: E0129 16:08:37.494363 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:08:42 crc kubenswrapper[5008]: E0129 16:08:42.328564 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:08:45 crc kubenswrapper[5008]: E0129 16:08:45.325527 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:08:49 crc kubenswrapper[5008]: I0129 16:08:49.324518 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:08:49 crc kubenswrapper[5008]: E0129 16:08:49.325566 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:08:50 crc kubenswrapper[5008]: E0129 16:08:50.325898 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:08:50 crc kubenswrapper[5008]: E0129 16:08:50.325916 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:08:57 crc kubenswrapper[5008]: E0129 16:08:57.468318 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:08:57 crc kubenswrapper[5008]: E0129 16:08:57.468853 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l8j5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fl9wc_openshift-marketplace(66b503d3-cf12-4a89-90ca-27d7f941ed63): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:08:57 crc kubenswrapper[5008]: E0129 16:08:57.469956 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:08:57 crc kubenswrapper[5008]: E0129 16:08:57.473874 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:08:57 crc kubenswrapper[5008]: E0129 16:08:57.473977 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29db7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6qmv7_openshift-marketplace(2ed48245-be09-46c8-97f9-263179717512): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:08:57 crc kubenswrapper[5008]: E0129 16:08:57.475150 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:09:00 crc kubenswrapper[5008]: I0129 16:09:00.323947 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:09:00 crc kubenswrapper[5008]: E0129 16:09:00.324671 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:09:03 crc kubenswrapper[5008]: E0129 16:09:03.327234 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:09:05 crc kubenswrapper[5008]: E0129 16:09:05.326200 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:09:08 crc kubenswrapper[5008]: E0129 
16:09:08.326021 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:09:09 crc kubenswrapper[5008]: E0129 16:09:09.326052 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:09:13 crc kubenswrapper[5008]: I0129 16:09:13.324265 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:09:13 crc kubenswrapper[5008]: E0129 16:09:13.325717 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:09:15 crc kubenswrapper[5008]: E0129 16:09:15.326817 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:09:20 crc kubenswrapper[5008]: E0129 16:09:20.458444 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:09:20 crc kubenswrapper[5008]: E0129 16:09:20.459101 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gcgl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9lmvr_openshift-marketplace(0cf4cf5b-529f-49a9-900c-a94b840568d8): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:09:20 crc kubenswrapper[5008]: E0129 16:09:20.460355 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:09:22 crc kubenswrapper[5008]: E0129 16:09:22.324655 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:09:24 crc kubenswrapper[5008]: E0129 16:09:24.326365 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:09:28 crc kubenswrapper[5008]: I0129 16:09:28.324075 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:09:28 crc kubenswrapper[5008]: E0129 16:09:28.325017 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:09:29 
crc kubenswrapper[5008]: E0129 16:09:29.325966 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:09:31 crc kubenswrapper[5008]: E0129 16:09:31.325242 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:09:36 crc kubenswrapper[5008]: E0129 16:09:36.326170 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:09:36 crc kubenswrapper[5008]: E0129 16:09:36.326216 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:09:39 crc kubenswrapper[5008]: I0129 16:09:39.324347 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:09:39 crc kubenswrapper[5008]: E0129 16:09:39.324923 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:09:43 crc kubenswrapper[5008]: E0129 16:09:43.325849 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:09:44 crc kubenswrapper[5008]: E0129 16:09:44.325695 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:09:47 crc kubenswrapper[5008]: E0129 16:09:47.463064 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:09:47 crc kubenswrapper[5008]: E0129 16:09:47.463626 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l8j5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fl9wc_openshift-marketplace(66b503d3-cf12-4a89-90ca-27d7f941ed63): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:09:47 crc kubenswrapper[5008]: E0129 16:09:47.464909 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:09:50 crc kubenswrapper[5008]: E0129 16:09:50.325936 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:09:51 crc kubenswrapper[5008]: I0129 16:09:51.324631 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:09:51 crc kubenswrapper[5008]: E0129 16:09:51.325198 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:09:57 crc kubenswrapper[5008]: E0129 16:09:57.334151 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:09:58 crc kubenswrapper[5008]: E0129 16:09:58.325571 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:09:59 crc kubenswrapper[5008]: E0129 16:09:59.325824 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:10:02 crc kubenswrapper[5008]: E0129 16:10:02.326566 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:10:03 crc kubenswrapper[5008]: I0129 16:10:03.323763 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:10:03 crc kubenswrapper[5008]: E0129 16:10:03.324580 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:10:10 crc kubenswrapper[5008]: E0129 16:10:10.326726 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:10:11 crc kubenswrapper[5008]: E0129 16:10:11.325856 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:10:14 crc kubenswrapper[5008]: E0129 16:10:14.326593 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:10:15 crc kubenswrapper[5008]: E0129 16:10:15.325235 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 
16:10:16 crc kubenswrapper[5008]: I0129 16:10:16.324476 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:10:16 crc kubenswrapper[5008]: E0129 16:10:16.324808 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:10:22 crc kubenswrapper[5008]: E0129 16:10:22.326048 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:10:25 crc kubenswrapper[5008]: E0129 16:10:25.327413 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:10:26 crc kubenswrapper[5008]: E0129 16:10:26.325380 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:10:28 crc kubenswrapper[5008]: I0129 16:10:28.323552 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:10:28 crc kubenswrapper[5008]: E0129 16:10:28.324059 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:10:30 crc kubenswrapper[5008]: E0129 16:10:30.452565 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:10:30 crc kubenswrapper[5008]: E0129 16:10:30.453112 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29db7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6qmv7_openshift-marketplace(2ed48245-be09-46c8-97f9-263179717512): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:10:30 crc kubenswrapper[5008]: E0129 16:10:30.454325 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:10:36 crc kubenswrapper[5008]: E0129 16:10:36.326544 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:10:38 crc kubenswrapper[5008]: E0129 16:10:38.325320 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:10:39 crc kubenswrapper[5008]: E0129 16:10:39.325581 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:10:41 crc kubenswrapper[5008]: E0129 16:10:41.326252 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:10:42 crc kubenswrapper[5008]: I0129 16:10:42.323948 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:10:42 crc kubenswrapper[5008]: E0129 16:10:42.324280 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:10:47 crc kubenswrapper[5008]: E0129 16:10:47.458698 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:10:47 crc kubenswrapper[5008]: E0129 16:10:47.459223 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gcgl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9lmvr_openshift-marketplace(0cf4cf5b-529f-49a9-900c-a94b840568d8): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:10:47 crc kubenswrapper[5008]: E0129 16:10:47.460425 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-9lmvr" 
podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:10:50 crc kubenswrapper[5008]: E0129 16:10:50.325961 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:10:52 crc kubenswrapper[5008]: E0129 16:10:52.325660 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:10:55 crc kubenswrapper[5008]: E0129 16:10:55.325730 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:10:57 crc kubenswrapper[5008]: I0129 16:10:57.330424 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:10:57 crc kubenswrapper[5008]: E0129 16:10:57.331023 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:11:01 crc kubenswrapper[5008]: E0129 16:11:01.327458 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:11:04 crc kubenswrapper[5008]: E0129 16:11:04.326828 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:11:04 crc kubenswrapper[5008]: E0129 16:11:04.327107 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:11:08 crc kubenswrapper[5008]: E0129 16:11:08.325850 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:11:10 crc kubenswrapper[5008]: I0129 16:11:10.323938 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" 
Jan 29 16:11:10 crc kubenswrapper[5008]: E0129 16:11:10.324505 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:11:13 crc kubenswrapper[5008]: E0129 16:11:13.326991 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:11:15 crc kubenswrapper[5008]: E0129 16:11:15.327358 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:11:16 crc kubenswrapper[5008]: E0129 16:11:16.458323 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:11:16 crc kubenswrapper[5008]: E0129 16:11:16.458715 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l8j5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fl9wc_openshift-marketplace(66b503d3-cf12-4a89-90ca-27d7f941ed63): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 
(Forbidden)" logger="UnhandledError" Jan 29 16:11:16 crc kubenswrapper[5008]: E0129 16:11:16.460183 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:11:20 crc kubenswrapper[5008]: E0129 16:11:20.325876 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:11:23 crc kubenswrapper[5008]: I0129 16:11:23.324226 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:11:23 crc kubenswrapper[5008]: E0129 16:11:23.324859 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:11:26 crc kubenswrapper[5008]: E0129 16:11:26.326819 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:11:28 crc kubenswrapper[5008]: E0129 16:11:28.325296 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:11:28 crc kubenswrapper[5008]: E0129 16:11:28.326672 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:11:32 crc kubenswrapper[5008]: E0129 16:11:32.325982 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:11:35 crc kubenswrapper[5008]: I0129 16:11:35.324925 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:11:35 crc kubenswrapper[5008]: E0129 16:11:35.326213 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:11:37 crc kubenswrapper[5008]: E0129 16:11:37.332154 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:11:40 crc kubenswrapper[5008]: E0129 16:11:40.325648 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:11:40 crc kubenswrapper[5008]: E0129 16:11:40.327193 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:11:43 crc kubenswrapper[5008]: E0129 16:11:43.326390 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:11:48 crc kubenswrapper[5008]: I0129 16:11:48.325447 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:11:48 crc kubenswrapper[5008]: E0129 16:11:48.326741 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:11:50 crc kubenswrapper[5008]: E0129 16:11:50.330577 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:11:53 crc kubenswrapper[5008]: E0129 16:11:53.333453 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:11:53 crc kubenswrapper[5008]: E0129 16:11:53.333508 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 
29 16:11:58 crc kubenswrapper[5008]: E0129 16:11:58.326234 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:12:03 crc kubenswrapper[5008]: I0129 16:12:03.324126 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:12:03 crc kubenswrapper[5008]: E0129 16:12:03.325656 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:12:03 crc kubenswrapper[5008]: E0129 16:12:03.328912 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:12:04 crc kubenswrapper[5008]: E0129 16:12:04.326035 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:12:07 crc kubenswrapper[5008]: E0129 16:12:07.336511 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:12:12 crc kubenswrapper[5008]: E0129 16:12:12.327403 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:12:14 crc kubenswrapper[5008]: E0129 16:12:14.326284 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:12:17 crc kubenswrapper[5008]: I0129 16:12:17.330423 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:12:17 crc kubenswrapper[5008]: E0129 16:12:17.331145 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:12:19 crc kubenswrapper[5008]: E0129 16:12:19.326347 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:12:21 crc kubenswrapper[5008]: E0129 16:12:21.333217 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:12:26 crc kubenswrapper[5008]: E0129 16:12:26.325878 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:12:29 crc kubenswrapper[5008]: E0129 16:12:29.328802 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:12:30 crc kubenswrapper[5008]: E0129 16:12:30.325552 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:12:32 crc kubenswrapper[5008]: I0129 16:12:32.324305 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:12:32 crc kubenswrapper[5008]: E0129 16:12:32.324822 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:12:32 crc kubenswrapper[5008]: E0129 16:12:32.326206 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:12:37 crc kubenswrapper[5008]: E0129 16:12:37.332961 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:12:42 crc kubenswrapper[5008]: E0129 16:12:42.326880 
5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:12:42 crc kubenswrapper[5008]: E0129 16:12:42.327384 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:12:44 crc kubenswrapper[5008]: E0129 16:12:44.326254 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:12:47 crc kubenswrapper[5008]: I0129 16:12:47.330973 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:12:47 crc kubenswrapper[5008]: E0129 16:12:47.331686 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:12:52 crc kubenswrapper[5008]: E0129 16:12:52.325431 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:12:53 crc kubenswrapper[5008]: E0129 16:12:53.326585 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:12:54 crc kubenswrapper[5008]: E0129 16:12:54.324764 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:12:55 crc kubenswrapper[5008]: E0129 16:12:55.325255 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:12:58 crc kubenswrapper[5008]: I0129 16:12:58.323494 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:12:58 crc kubenswrapper[5008]: E0129 16:12:58.324022 5008 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:13:06 crc kubenswrapper[5008]: E0129 16:13:06.326514 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:13:07 crc kubenswrapper[5008]: E0129 16:13:07.333374 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:13:08 crc kubenswrapper[5008]: E0129 16:13:08.325259 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:13:10 crc kubenswrapper[5008]: E0129 16:13:10.325527 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:13:13 crc kubenswrapper[5008]: I0129 16:13:13.324572 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64" Jan 29 16:13:13 crc kubenswrapper[5008]: E0129 16:13:13.325042 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:13:19 crc kubenswrapper[5008]: E0129 16:13:19.328535 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:13:20 crc kubenswrapper[5008]: I0129 16:13:20.325690 5008 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:13:20 crc kubenswrapper[5008]: E0129 16:13:20.462679 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:13:20 crc 
kubenswrapper[5008]: E0129 16:13:20.462909 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29db7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6qmv7_openshift-marketplace(2ed48245-be09-46c8-97f9-263179717512): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:13:20 crc kubenswrapper[5008]: E0129 16:13:20.464448 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:13:23 crc kubenswrapper[5008]: E0129 16:13:23.325048 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:13:24 crc kubenswrapper[5008]: E0129 16:13:24.326308 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:13:26 crc kubenswrapper[5008]: I0129 16:13:26.324382 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64"
Jan 29 16:13:27 crc kubenswrapper[5008]: I0129 16:13:27.536096 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerStarted","Data":"b700e8418443771845187d679243e192744c1e88425ed21d7245867ce870d957"}
Jan 29 16:13:30 crc kubenswrapper[5008]: E0129 16:13:30.326737 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:13:32 crc kubenswrapper[5008]: E0129 16:13:32.325913 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:13:34 crc kubenswrapper[5008]: E0129 16:13:34.447447 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24:latest"
Jan 29 16:13:34 crc kubenswrapper[5008]: E0129 16:13:34.448355 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zk8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d40740f9-e8d8-4f46-b8b0-d913a6c33210): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:13:34 crc kubenswrapper[5008]: E0129 16:13:34.450560 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:13:40 crc kubenswrapper[5008]: E0129 16:13:40.449993 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 29 16:13:40 crc kubenswrapper[5008]: E0129 16:13:40.450710 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gcgl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9lmvr_openshift-marketplace(0cf4cf5b-529f-49a9-900c-a94b840568d8): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:13:40 crc kubenswrapper[5008]: E0129 16:13:40.452168 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:13:41 crc kubenswrapper[5008]: E0129 16:13:41.325602 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:13:47 crc kubenswrapper[5008]: E0129 16:13:47.332239 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:13:47 crc kubenswrapper[5008]: E0129 16:13:47.332631 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:13:53 crc kubenswrapper[5008]: E0129 16:13:53.327848 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:13:55 crc kubenswrapper[5008]: E0129 16:13:55.325496 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:13:58 crc kubenswrapper[5008]: E0129 16:13:58.328067 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:14:02 crc kubenswrapper[5008]: E0129 16:14:02.326683 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:14:05 crc kubenswrapper[5008]: E0129 16:14:05.325236 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:14:09 crc kubenswrapper[5008]: E0129 16:14:09.326440 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:14:09 crc kubenswrapper[5008]: E0129 16:14:09.453469 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 29 16:14:09 crc kubenswrapper[5008]: E0129 16:14:09.453955 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l8j5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fl9wc_openshift-marketplace(66b503d3-cf12-4a89-90ca-27d7f941ed63): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:14:09 crc kubenswrapper[5008]: E0129 16:14:09.455124 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:14:15 crc kubenswrapper[5008]: I0129 16:14:15.628093 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7dqqz"]
Jan 29 16:14:15 crc kubenswrapper[5008]: I0129 16:14:15.630681 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7dqqz"
Jan 29 16:14:15 crc kubenswrapper[5008]: I0129 16:14:15.644210 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7dqqz"]
Jan 29 16:14:15 crc kubenswrapper[5008]: I0129 16:14:15.805658 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl7kv\" (UniqueName: \"kubernetes.io/projected/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-kube-api-access-bl7kv\") pod \"redhat-operators-7dqqz\" (UID: \"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55\") " pod="openshift-marketplace/redhat-operators-7dqqz"
Jan 29 16:14:15 crc kubenswrapper[5008]: I0129 16:14:15.805711 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-utilities\") pod \"redhat-operators-7dqqz\" (UID: \"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55\") " pod="openshift-marketplace/redhat-operators-7dqqz"
Jan 29 16:14:15 crc kubenswrapper[5008]: I0129 16:14:15.805750 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-catalog-content\") pod \"redhat-operators-7dqqz\" (UID: \"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55\") " pod="openshift-marketplace/redhat-operators-7dqqz"
Jan 29 16:14:15 crc kubenswrapper[5008]: I0129 16:14:15.908003 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-catalog-content\") pod \"redhat-operators-7dqqz\" (UID: \"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55\") " pod="openshift-marketplace/redhat-operators-7dqqz"
Jan 29 16:14:15 crc kubenswrapper[5008]: I0129 16:14:15.908232 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl7kv\" (UniqueName: \"kubernetes.io/projected/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-kube-api-access-bl7kv\") pod \"redhat-operators-7dqqz\" (UID: \"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55\") " pod="openshift-marketplace/redhat-operators-7dqqz"
Jan 29 16:14:15 crc kubenswrapper[5008]: I0129 16:14:15.908260 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-utilities\") pod \"redhat-operators-7dqqz\" (UID: \"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55\") " pod="openshift-marketplace/redhat-operators-7dqqz"
Jan 29 16:14:15 crc kubenswrapper[5008]: I0129 16:14:15.909197 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-utilities\") pod \"redhat-operators-7dqqz\" (UID: \"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55\") " pod="openshift-marketplace/redhat-operators-7dqqz"
Jan 29 16:14:15 crc kubenswrapper[5008]: I0129 16:14:15.909598 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-catalog-content\") pod \"redhat-operators-7dqqz\" (UID: \"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55\") " pod="openshift-marketplace/redhat-operators-7dqqz"
Jan 29 16:14:15 crc kubenswrapper[5008]: I0129 16:14:15.940924 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl7kv\" (UniqueName: \"kubernetes.io/projected/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-kube-api-access-bl7kv\") pod \"redhat-operators-7dqqz\" (UID: \"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55\") " pod="openshift-marketplace/redhat-operators-7dqqz"
Jan 29 16:14:15 crc kubenswrapper[5008]: I0129 16:14:15.962865 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7dqqz"
Jan 29 16:14:16 crc kubenswrapper[5008]: I0129 16:14:16.454001 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7dqqz"]
Jan 29 16:14:16 crc kubenswrapper[5008]: I0129 16:14:16.933471 5008 generic.go:334] "Generic (PLEG): container finished" podID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" containerID="5afd0a214b8f8d22e6164362eafb7f99729ea9d22bade9b4d16142746c8240a6" exitCode=0
Jan 29 16:14:16 crc kubenswrapper[5008]: I0129 16:14:16.933667 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dqqz" event={"ID":"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55","Type":"ContainerDied","Data":"5afd0a214b8f8d22e6164362eafb7f99729ea9d22bade9b4d16142746c8240a6"}
Jan 29 16:14:16 crc kubenswrapper[5008]: I0129 16:14:16.933849 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dqqz" event={"ID":"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55","Type":"ContainerStarted","Data":"4bc8d674639c663e12f180fa6c89b4e70c92f8b3fda66ccac4d3e879acdf15cc"}
Jan 29 16:14:17 crc kubenswrapper[5008]: E0129 16:14:17.073478 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 29 16:14:17 crc kubenswrapper[5008]: E0129 16:14:17.073806 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bl7kv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-7dqqz_openshift-marketplace(4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:14:17 crc kubenswrapper[5008]: E0129 16:14:17.075245 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"
Jan 29 16:14:17 crc kubenswrapper[5008]: E0129 16:14:17.331071 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:14:17 crc kubenswrapper[5008]: E0129 16:14:17.331377 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:14:17 crc kubenswrapper[5008]: E0129 16:14:17.949802 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"
Jan 29 16:14:21 crc kubenswrapper[5008]: E0129 16:14:21.325352 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:14:24 crc kubenswrapper[5008]: E0129 16:14:24.326215 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:14:31 crc kubenswrapper[5008]: E0129 16:14:31.326831 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:14:31 crc kubenswrapper[5008]: E0129 16:14:31.453056 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 29 16:14:31 crc kubenswrapper[5008]: E0129 16:14:31.453212 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bl7kv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-7dqqz_openshift-marketplace(4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:14:31 crc kubenswrapper[5008]: E0129 16:14:31.454379 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"
Jan 29 16:14:32 crc kubenswrapper[5008]: E0129 16:14:32.325144 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:14:32 crc kubenswrapper[5008]: E0129 16:14:32.325150 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:14:37 crc kubenswrapper[5008]: E0129 16:14:37.332542 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:14:46 crc kubenswrapper[5008]: E0129 16:14:46.326476 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"
Jan 29 16:14:46 crc kubenswrapper[5008]: E0129 16:14:46.326712 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:14:47 crc kubenswrapper[5008]: E0129 16:14:47.331882 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:14:47 crc kubenswrapper[5008]: E0129 16:14:47.332042 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:14:51 crc kubenswrapper[5008]: E0129 16:14:51.327907 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:14:58 crc kubenswrapper[5008]: E0129 16:14:58.325607 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:14:59 crc kubenswrapper[5008]: E0129 16:14:59.325977 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:14:59 crc kubenswrapper[5008]: E0129 16:14:59.465868 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 29 16:14:59 crc kubenswrapper[5008]: E0129 16:14:59.466049 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bl7kv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-7dqqz_openshift-marketplace(4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:14:59 crc kubenswrapper[5008]: E0129 16:14:59.467259 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"
Jan 29 16:15:00 crc kubenswrapper[5008]: I0129 16:15:00.165949 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx"]
Jan 29 16:15:00 crc kubenswrapper[5008]: I0129 16:15:00.167484 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx"
Jan 29 16:15:00 crc kubenswrapper[5008]: I0129 16:15:00.173105 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 29 16:15:00 crc kubenswrapper[5008]: I0129 16:15:00.176679 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 29 16:15:00 crc kubenswrapper[5008]: I0129 16:15:00.189263 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx"]
Jan 29 16:15:00 crc kubenswrapper[5008]: I0129 16:15:00.257596 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm8vj\" (UniqueName: \"kubernetes.io/projected/44e772a8-b044-4c03-a83a-4634997d4139-kube-api-access-wm8vj\") pod \"collect-profiles-29495055-8s5hx\" (UID: \"44e772a8-b044-4c03-a83a-4634997d4139\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx"
Jan 29 16:15:00 crc kubenswrapper[5008]: I0129 16:15:00.257685 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44e772a8-b044-4c03-a83a-4634997d4139-secret-volume\") pod \"collect-profiles-29495055-8s5hx\" (UID: \"44e772a8-b044-4c03-a83a-4634997d4139\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx"
Jan 29 16:15:00 crc kubenswrapper[5008]: I0129 16:15:00.257845 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44e772a8-b044-4c03-a83a-4634997d4139-config-volume\") pod \"collect-profiles-29495055-8s5hx\" (UID: \"44e772a8-b044-4c03-a83a-4634997d4139\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx"
Jan 29 16:15:00 crc kubenswrapper[5008]: I0129 16:15:00.360954 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm8vj\" (UniqueName: \"kubernetes.io/projected/44e772a8-b044-4c03-a83a-4634997d4139-kube-api-access-wm8vj\") pod \"collect-profiles-29495055-8s5hx\" (UID: \"44e772a8-b044-4c03-a83a-4634997d4139\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx"
Jan 29 16:15:00 crc kubenswrapper[5008]: I0129 16:15:00.361224 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44e772a8-b044-4c03-a83a-4634997d4139-secret-volume\") pod \"collect-profiles-29495055-8s5hx\" (UID: \"44e772a8-b044-4c03-a83a-4634997d4139\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx"
Jan 29 16:15:00 crc kubenswrapper[5008]: I0129 16:15:00.361395 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44e772a8-b044-4c03-a83a-4634997d4139-config-volume\") pod \"collect-profiles-29495055-8s5hx\" (UID: \"44e772a8-b044-4c03-a83a-4634997d4139\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx"
Jan 29 16:15:00 crc kubenswrapper[5008]: I0129 16:15:00.363004 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44e772a8-b044-4c03-a83a-4634997d4139-config-volume\") pod \"collect-profiles-29495055-8s5hx\" (UID: \"44e772a8-b044-4c03-a83a-4634997d4139\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx"
Jan 29 16:15:00 crc kubenswrapper[5008]: I0129 16:15:00.369216 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44e772a8-b044-4c03-a83a-4634997d4139-secret-volume\") pod \"collect-profiles-29495055-8s5hx\" (UID: \"44e772a8-b044-4c03-a83a-4634997d4139\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx"
Jan 29 16:15:00 crc kubenswrapper[5008]: I0129 16:15:00.384518 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm8vj\" (UniqueName: \"kubernetes.io/projected/44e772a8-b044-4c03-a83a-4634997d4139-kube-api-access-wm8vj\") pod \"collect-profiles-29495055-8s5hx\" (UID: \"44e772a8-b044-4c03-a83a-4634997d4139\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx"
Jan 29 16:15:00 crc kubenswrapper[5008]: I0129 16:15:00.498413 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx"
Jan 29 16:15:00 crc kubenswrapper[5008]: I0129 16:15:00.941905 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx"]
Jan 29 16:15:00 crc kubenswrapper[5008]: W0129 16:15:00.945954 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44e772a8_b044_4c03_a83a_4634997d4139.slice/crio-cf286ebe62ff3cb0452ca5303bcdb9523113e735312b843d7928f893722fa21c WatchSource:0}: Error finding container cf286ebe62ff3cb0452ca5303bcdb9523113e735312b843d7928f893722fa21c: Status 404 returned error can't find the container with id cf286ebe62ff3cb0452ca5303bcdb9523113e735312b843d7928f893722fa21c
Jan 29 16:15:01 crc kubenswrapper[5008]: I0129 16:15:01.301014 5008 generic.go:334] "Generic (PLEG): container finished" podID="44e772a8-b044-4c03-a83a-4634997d4139" containerID="1955b67636880bbd2ed0bae81f814ec3605cfeeec18fe7a5bbb4a833cb6b1859" exitCode=0
Jan 29 16:15:01 crc kubenswrapper[5008]: I0129 16:15:01.301054 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx" event={"ID":"44e772a8-b044-4c03-a83a-4634997d4139","Type":"ContainerDied","Data":"1955b67636880bbd2ed0bae81f814ec3605cfeeec18fe7a5bbb4a833cb6b1859"}
Jan 29 16:15:01 crc kubenswrapper[5008]: I0129 16:15:01.301084 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx" event={"ID":"44e772a8-b044-4c03-a83a-4634997d4139","Type":"ContainerStarted","Data":"cf286ebe62ff3cb0452ca5303bcdb9523113e735312b843d7928f893722fa21c"}
Jan 29 16:15:02 crc kubenswrapper[5008]: E0129 16:15:02.325688 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:15:02 crc kubenswrapper[5008]: I0129 16:15:02.657020 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx"
Jan 29 16:15:02 crc kubenswrapper[5008]: I0129 16:15:02.818001 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44e772a8-b044-4c03-a83a-4634997d4139-config-volume\") pod \"44e772a8-b044-4c03-a83a-4634997d4139\" (UID: \"44e772a8-b044-4c03-a83a-4634997d4139\") "
Jan 29 16:15:02 crc kubenswrapper[5008]: I0129 16:15:02.818154 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44e772a8-b044-4c03-a83a-4634997d4139-secret-volume\") pod \"44e772a8-b044-4c03-a83a-4634997d4139\" (UID: \"44e772a8-b044-4c03-a83a-4634997d4139\") "
Jan 29 16:15:02 crc kubenswrapper[5008]: I0129 16:15:02.818210 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm8vj\" (UniqueName: \"kubernetes.io/projected/44e772a8-b044-4c03-a83a-4634997d4139-kube-api-access-wm8vj\") pod \"44e772a8-b044-4c03-a83a-4634997d4139\" (UID: \"44e772a8-b044-4c03-a83a-4634997d4139\") "
Jan 29 16:15:02 crc kubenswrapper[5008]: I0129 16:15:02.819146 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44e772a8-b044-4c03-a83a-4634997d4139-config-volume" (OuterVolumeSpecName: "config-volume") pod "44e772a8-b044-4c03-a83a-4634997d4139" (UID: "44e772a8-b044-4c03-a83a-4634997d4139"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:15:02 crc kubenswrapper[5008]: I0129 16:15:02.823505 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44e772a8-b044-4c03-a83a-4634997d4139-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "44e772a8-b044-4c03-a83a-4634997d4139" (UID: "44e772a8-b044-4c03-a83a-4634997d4139"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:15:02 crc kubenswrapper[5008]: I0129 16:15:02.823872 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44e772a8-b044-4c03-a83a-4634997d4139-kube-api-access-wm8vj" (OuterVolumeSpecName: "kube-api-access-wm8vj") pod "44e772a8-b044-4c03-a83a-4634997d4139" (UID: "44e772a8-b044-4c03-a83a-4634997d4139"). InnerVolumeSpecName "kube-api-access-wm8vj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:15:02 crc kubenswrapper[5008]: I0129 16:15:02.920580 5008 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44e772a8-b044-4c03-a83a-4634997d4139-config-volume\") on node \"crc\" DevicePath \"\""
Jan 29 16:15:02 crc kubenswrapper[5008]: I0129 16:15:02.920630 5008 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44e772a8-b044-4c03-a83a-4634997d4139-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 29 16:15:02 crc kubenswrapper[5008]: I0129 16:15:02.920642 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm8vj\" (UniqueName: \"kubernetes.io/projected/44e772a8-b044-4c03-a83a-4634997d4139-kube-api-access-wm8vj\") on node \"crc\" DevicePath \"\""
Jan 29 16:15:03 crc kubenswrapper[5008]: I0129 16:15:03.317694 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx" event={"ID":"44e772a8-b044-4c03-a83a-4634997d4139","Type":"ContainerDied","Data":"cf286ebe62ff3cb0452ca5303bcdb9523113e735312b843d7928f893722fa21c"}
Jan 29 16:15:03 crc kubenswrapper[5008]: I0129 16:15:03.317739 5008 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf286ebe62ff3cb0452ca5303bcdb9523113e735312b843d7928f893722fa21c"
Jan 29 16:15:03 crc kubenswrapper[5008]: I0129 16:15:03.317749 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-8s5hx"
Jan 29 16:15:03 crc kubenswrapper[5008]: I0129 16:15:03.724577 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4"]
Jan 29 16:15:03 crc kubenswrapper[5008]: I0129 16:15:03.733042 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495010-t7nh4"]
Jan 29 16:15:05 crc kubenswrapper[5008]: E0129 16:15:05.326095 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:15:05 crc kubenswrapper[5008]: I0129 16:15:05.342488 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a912999-007c-495d-aaa3-857d76158a91" path="/var/lib/kubelet/pods/4a912999-007c-495d-aaa3-857d76158a91/volumes"
Jan 29 16:15:12 crc kubenswrapper[5008]: E0129 16:15:12.325585 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"
Jan 29 16:15:12 crc kubenswrapper[5008]: E0129 16:15:12.326240 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:15:13 crc kubenswrapper[5008]: E0129 16:15:13.326024 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:15:16 crc kubenswrapper[5008]: E0129 16:15:16.326270 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:15:18 crc kubenswrapper[5008]: E0129 16:15:18.325651 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:15:24 crc kubenswrapper[5008]: E0129 16:15:24.325867 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:15:24 crc kubenswrapper[5008]: E0129 16:15:24.325991 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:15:26 crc kubenswrapper[5008]: E0129 16:15:26.326160 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"
Jan 29 16:15:27 crc kubenswrapper[5008]: I0129 16:15:27.227099 5008 scope.go:117] "RemoveContainer" containerID="74e48ee561dff74c0b937607b1d67f636544c839b5dfad578f5c993d847e004b"
Jan 29 16:15:28 crc kubenswrapper[5008]: E0129 16:15:28.325870 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:15:31 crc kubenswrapper[5008]: E0129 16:15:31.330414 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:15:38 crc kubenswrapper[5008]: E0129 16:15:38.326654 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:15:39 crc kubenswrapper[5008]: E0129 16:15:39.326742 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:15:40 crc kubenswrapper[5008]: E0129 16:15:40.457836 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 29 16:15:40 crc kubenswrapper[5008]: E0129 16:15:40.458630 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bl7kv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-7dqqz_openshift-marketplace(4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:15:40 crc kubenswrapper[5008]: E0129 16:15:40.460318 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"
Jan 29 16:15:41 crc kubenswrapper[5008]: E0129 16:15:41.325617 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:15:43 crc kubenswrapper[5008]: I0129 16:15:43.990390 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 16:15:43 crc kubenswrapper[5008]: I0129 16:15:43.990703 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 16:15:44 crc kubenswrapper[5008]: E0129 16:15:44.326178 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:15:52 crc kubenswrapper[5008]: E0129 16:15:52.326278 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"
Jan 29 16:15:53 crc kubenswrapper[5008]: E0129 16:15:53.325397 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:15:54 crc kubenswrapper[5008]: E0129 16:15:54.326321 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:15:54 crc kubenswrapper[5008]: E0129 16:15:54.327082 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:15:56 crc kubenswrapper[5008]: E0129 16:15:56.326879 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:16:04 crc kubenswrapper[5008]: E0129 16:16:04.326756 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"
Jan 29 16:16:05 crc kubenswrapper[5008]: E0129 16:16:05.324316 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:16:08 crc kubenswrapper[5008]: E0129 16:16:08.326114 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:16:08 crc kubenswrapper[5008]: E0129 16:16:08.335547 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:16:10 crc kubenswrapper[5008]: E0129 16:16:10.325465 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:16:13 crc kubenswrapper[5008]: I0129 16:16:13.990299 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 16:16:13 crc kubenswrapper[5008]: I0129 16:16:13.991166 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 16:16:15 crc kubenswrapper[5008]: E0129 16:16:15.325646 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"
Jan 29 16:16:19 crc kubenswrapper[5008]: E0129 16:16:19.325475 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:16:20 crc kubenswrapper[5008]: E0129 16:16:20.326036 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:16:20 crc kubenswrapper[5008]: E0129 16:16:20.326163 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:16:25 crc kubenswrapper[5008]: E0129 16:16:25.327652 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:16:29 crc kubenswrapper[5008]: E0129 16:16:29.325105 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"
Jan 29 16:16:31 crc kubenswrapper[5008]: E0129 16:16:31.325902 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8"
Jan 29 16:16:32 crc kubenswrapper[5008]: E0129 16:16:32.326600 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210"
Jan 29 16:16:34 crc kubenswrapper[5008]: E0129 16:16:34.326296 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63"
Jan 29 16:16:37 crc kubenswrapper[5008]: E0129 16:16:37.337347 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512"
Jan 29 16:16:41 crc kubenswrapper[5008]: E0129 16:16:41.326193 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"
Jan 29 16:16:43 crc kubenswrapper[5008]: I0129 16:16:43.990501 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 16:16:43 crc kubenswrapper[5008]: I0129 16:16:43.990857 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 16:16:43 crc kubenswrapper[5008]: I0129 16:16:43.990900 5008 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8"
Jan 29 16:16:43 crc kubenswrapper[5008]: I0129 16:16:43.991625 5008 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b700e8418443771845187d679243e192744c1e88425ed21d7245867ce870d957"} pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 16:16:43 crc kubenswrapper[5008]: I0129 16:16:43.991680 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" containerID="cri-o://b700e8418443771845187d679243e192744c1e88425ed21d7245867ce870d957" gracePeriod=600
Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.163105 5008 generic.go:334] "Generic (PLEG): container finished" podID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerID="b700e8418443771845187d679243e192744c1e88425ed21d7245867ce870d957" exitCode=0
Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.163142 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerDied","Data":"b700e8418443771845187d679243e192744c1e88425ed21d7245867ce870d957"}
Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.163196 5008 scope.go:117] "RemoveContainer" containerID="cb5e6384a544764e5b0e5a38f2e442c3dc79aaa0e3b882c450dadd5dfb981e64"
Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.300285 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-nvrh2/must-gather-f7qvt"]
Jan 29 16:16:44 crc kubenswrapper[5008]: E0129 16:16:44.301403 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e772a8-b044-4c03-a83a-4634997d4139" containerName="collect-profiles"
Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.301424 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e772a8-b044-4c03-a83a-4634997d4139" containerName="collect-profiles"
Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.301678 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="44e772a8-b044-4c03-a83a-4634997d4139" containerName="collect-profiles"
Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.302761 5008 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-must-gather-nvrh2/must-gather-f7qvt" Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.307355 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-nvrh2"/"openshift-service-ca.crt" Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.307740 5008 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-nvrh2"/"kube-root-ca.crt" Jan 29 16:16:44 crc kubenswrapper[5008]: E0129 16:16:44.329046 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.331355 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nvrh2/must-gather-f7qvt"] Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.372741 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5tbc\" (UniqueName: \"kubernetes.io/projected/d320dd2e-14dc-4c54-86bf-25b5abd30dae-kube-api-access-p5tbc\") pod \"must-gather-f7qvt\" (UID: \"d320dd2e-14dc-4c54-86bf-25b5abd30dae\") " pod="openshift-must-gather-nvrh2/must-gather-f7qvt" Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.372840 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d320dd2e-14dc-4c54-86bf-25b5abd30dae-must-gather-output\") pod \"must-gather-f7qvt\" (UID: \"d320dd2e-14dc-4c54-86bf-25b5abd30dae\") " pod="openshift-must-gather-nvrh2/must-gather-f7qvt" Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.475041 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5tbc\" (UniqueName: \"kubernetes.io/projected/d320dd2e-14dc-4c54-86bf-25b5abd30dae-kube-api-access-p5tbc\") pod \"must-gather-f7qvt\" (UID: \"d320dd2e-14dc-4c54-86bf-25b5abd30dae\") " pod="openshift-must-gather-nvrh2/must-gather-f7qvt" Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.475124 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d320dd2e-14dc-4c54-86bf-25b5abd30dae-must-gather-output\") pod \"must-gather-f7qvt\" (UID: \"d320dd2e-14dc-4c54-86bf-25b5abd30dae\") " pod="openshift-must-gather-nvrh2/must-gather-f7qvt" Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.475990 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d320dd2e-14dc-4c54-86bf-25b5abd30dae-must-gather-output\") pod \"must-gather-f7qvt\" (UID: \"d320dd2e-14dc-4c54-86bf-25b5abd30dae\") " pod="openshift-must-gather-nvrh2/must-gather-f7qvt" Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.497973 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5tbc\" (UniqueName: \"kubernetes.io/projected/d320dd2e-14dc-4c54-86bf-25b5abd30dae-kube-api-access-p5tbc\") pod \"must-gather-f7qvt\" (UID: \"d320dd2e-14dc-4c54-86bf-25b5abd30dae\") " pod="openshift-must-gather-nvrh2/must-gather-f7qvt" Jan 29 16:16:44 crc kubenswrapper[5008]: I0129 16:16:44.632372 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nvrh2/must-gather-f7qvt" Jan 29 16:16:45 crc kubenswrapper[5008]: I0129 16:16:45.069963 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nvrh2/must-gather-f7qvt"] Jan 29 16:16:45 crc kubenswrapper[5008]: W0129 16:16:45.070375 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd320dd2e_14dc_4c54_86bf_25b5abd30dae.slice/crio-4e472c34cfa7d773a6e23ce027b20aac173cd5ea59646b458c8fe01c231b2b31 WatchSource:0}: Error finding container 4e472c34cfa7d773a6e23ce027b20aac173cd5ea59646b458c8fe01c231b2b31: Status 404 returned error can't find the container with id 4e472c34cfa7d773a6e23ce027b20aac173cd5ea59646b458c8fe01c231b2b31 Jan 29 16:16:45 crc kubenswrapper[5008]: I0129 16:16:45.177506 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerStarted","Data":"4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec"} Jan 29 16:16:45 crc kubenswrapper[5008]: I0129 16:16:45.179143 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nvrh2/must-gather-f7qvt" event={"ID":"d320dd2e-14dc-4c54-86bf-25b5abd30dae","Type":"ContainerStarted","Data":"4e472c34cfa7d773a6e23ce027b20aac173cd5ea59646b458c8fe01c231b2b31"} Jan 29 16:16:46 crc kubenswrapper[5008]: E0129 16:16:46.325654 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:16:50 crc kubenswrapper[5008]: E0129 16:16:50.761096 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:16:52 crc kubenswrapper[5008]: E0129 16:16:52.932501 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:16:52 crc kubenswrapper[5008]: E0129 16:16:52.932502 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" Jan 29 16:16:54 crc kubenswrapper[5008]: I0129 16:16:54.266629 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nvrh2/must-gather-f7qvt" event={"ID":"d320dd2e-14dc-4c54-86bf-25b5abd30dae","Type":"ContainerStarted","Data":"3327cb68737f553fc5a657c32f672ee7fa9a240ba24d843df1220fe098f622fe"} Jan 29 16:16:55 crc kubenswrapper[5008]: I0129 16:16:55.281258 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nvrh2/must-gather-f7qvt" 
event={"ID":"d320dd2e-14dc-4c54-86bf-25b5abd30dae","Type":"ContainerStarted","Data":"ea23d1b8036291fc45a3f31fc97e29dc32fd1ff69a4590d0e2497457df3a82ce"} Jan 29 16:16:55 crc kubenswrapper[5008]: I0129 16:16:55.319060 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-nvrh2/must-gather-f7qvt" podStartSLOduration=2.708850406 podStartE2EDuration="11.319043024s" podCreationTimestamp="2026-01-29 16:16:44 +0000 UTC" firstStartedPulling="2026-01-29 16:16:45.072278672 +0000 UTC m=+2948.745132909" lastFinishedPulling="2026-01-29 16:16:53.68247129 +0000 UTC m=+2957.355325527" observedRunningTime="2026-01-29 16:16:55.313270105 +0000 UTC m=+2958.986124342" watchObservedRunningTime="2026-01-29 16:16:55.319043024 +0000 UTC m=+2958.991897261" Jan 29 16:16:55 crc kubenswrapper[5008]: E0129 16:16:55.326322 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:16:58 crc kubenswrapper[5008]: E0129 16:16:58.328209 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:17:04 crc kubenswrapper[5008]: E0129 16:17:04.325191 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:17:05 crc kubenswrapper[5008]: I0129 16:17:05.308575 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-nvrh2/crc-debug-wrjnm"] Jan 29 16:17:05 crc kubenswrapper[5008]: I0129 16:17:05.310392 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nvrh2/crc-debug-wrjnm" Jan 29 16:17:05 crc kubenswrapper[5008]: I0129 16:17:05.312597 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-nvrh2"/"default-dockercfg-r9q2j" Jan 29 16:17:05 crc kubenswrapper[5008]: I0129 16:17:05.394297 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8077b692-59d3-4065-8632-745ffcd783af-host\") pod \"crc-debug-wrjnm\" (UID: \"8077b692-59d3-4065-8632-745ffcd783af\") " pod="openshift-must-gather-nvrh2/crc-debug-wrjnm" Jan 29 16:17:05 crc kubenswrapper[5008]: I0129 16:17:05.394370 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvl6w\" (UniqueName: \"kubernetes.io/projected/8077b692-59d3-4065-8632-745ffcd783af-kube-api-access-cvl6w\") pod \"crc-debug-wrjnm\" (UID: \"8077b692-59d3-4065-8632-745ffcd783af\") " pod="openshift-must-gather-nvrh2/crc-debug-wrjnm" Jan 29 16:17:05 crc kubenswrapper[5008]: I0129 16:17:05.496184 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8077b692-59d3-4065-8632-745ffcd783af-host\") pod \"crc-debug-wrjnm\" (UID: \"8077b692-59d3-4065-8632-745ffcd783af\") " pod="openshift-must-gather-nvrh2/crc-debug-wrjnm" Jan 29 16:17:05 crc kubenswrapper[5008]: I0129 16:17:05.496261 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvl6w\" (UniqueName: \"kubernetes.io/projected/8077b692-59d3-4065-8632-745ffcd783af-kube-api-access-cvl6w\") pod \"crc-debug-wrjnm\" (UID: \"8077b692-59d3-4065-8632-745ffcd783af\") " pod="openshift-must-gather-nvrh2/crc-debug-wrjnm" Jan 29 16:17:05 crc kubenswrapper[5008]: I0129 16:17:05.496643 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8077b692-59d3-4065-8632-745ffcd783af-host\") pod \"crc-debug-wrjnm\" (UID: \"8077b692-59d3-4065-8632-745ffcd783af\") " pod="openshift-must-gather-nvrh2/crc-debug-wrjnm" Jan 29 16:17:05 crc kubenswrapper[5008]: I0129 16:17:05.525428 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvl6w\" (UniqueName: \"kubernetes.io/projected/8077b692-59d3-4065-8632-745ffcd783af-kube-api-access-cvl6w\") pod \"crc-debug-wrjnm\" (UID: \"8077b692-59d3-4065-8632-745ffcd783af\") " pod="openshift-must-gather-nvrh2/crc-debug-wrjnm" Jan 29 16:17:05 crc kubenswrapper[5008]: I0129 16:17:05.635528 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nvrh2/crc-debug-wrjnm" Jan 29 16:17:05 crc kubenswrapper[5008]: W0129 16:17:05.666822 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8077b692_59d3_4065_8632_745ffcd783af.slice/crio-fafb9aae9ad84d964e9ac7b5fe41fe2d6341c2a5ab14aebcb1e10322b2b043fe WatchSource:0}: Error finding container fafb9aae9ad84d964e9ac7b5fe41fe2d6341c2a5ab14aebcb1e10322b2b043fe: Status 404 returned error can't find the container with id fafb9aae9ad84d964e9ac7b5fe41fe2d6341c2a5ab14aebcb1e10322b2b043fe Jan 29 16:17:06 crc kubenswrapper[5008]: I0129 16:17:06.394703 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nvrh2/crc-debug-wrjnm" event={"ID":"8077b692-59d3-4065-8632-745ffcd783af","Type":"ContainerStarted","Data":"fafb9aae9ad84d964e9ac7b5fe41fe2d6341c2a5ab14aebcb1e10322b2b043fe"} Jan 29 16:17:06 crc kubenswrapper[5008]: E0129 16:17:06.449869 5008 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 16:17:06 crc kubenswrapper[5008]: E0129 16:17:06.450016 5008 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bl7kv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-7dqqz_openshift-marketplace(4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:17:06 crc kubenswrapper[5008]: E0129 16:17:06.451260 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source 
docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" Jan 29 16:17:07 crc kubenswrapper[5008]: E0129 16:17:07.332270 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:17:09 crc kubenswrapper[5008]: E0129 16:17:09.325754 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:17:18 crc kubenswrapper[5008]: E0129 16:17:18.816120 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" Jan 29 16:17:18 crc kubenswrapper[5008]: E0129 16:17:18.817172 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:17:19 crc kubenswrapper[5008]: E0129 16:17:19.325948 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:17:19 crc kubenswrapper[5008]: E0129 16:17:19.325997 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:17:19 crc kubenswrapper[5008]: I0129 16:17:19.502434 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nvrh2/crc-debug-wrjnm" event={"ID":"8077b692-59d3-4065-8632-745ffcd783af","Type":"ContainerStarted","Data":"12f78f704b07eccfa0b429f65cb28772b19c0b10e53b2bfdda418b422bc2f249"} Jan 29 16:17:19 crc kubenswrapper[5008]: I0129 16:17:19.523130 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-nvrh2/crc-debug-wrjnm" podStartSLOduration=1.3095030730000001 podStartE2EDuration="14.523111221s" podCreationTimestamp="2026-01-29 16:17:05 +0000 UTC" firstStartedPulling="2026-01-29 16:17:05.670503408 +0000 UTC m=+2969.343357645" lastFinishedPulling="2026-01-29 16:17:18.884111556 +0000 UTC m=+2982.556965793" observedRunningTime="2026-01-29 16:17:19.514521813 +0000 UTC m=+2983.187376050" watchObservedRunningTime="2026-01-29 16:17:19.523111221 +0000 UTC m=+2983.195965448" Jan 29 16:17:20 crc kubenswrapper[5008]: E0129 
16:17:20.326550 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:17:21 crc kubenswrapper[5008]: I0129 16:17:21.520460 5008 generic.go:334] "Generic (PLEG): container finished" podID="8077b692-59d3-4065-8632-745ffcd783af" containerID="12f78f704b07eccfa0b429f65cb28772b19c0b10e53b2bfdda418b422bc2f249" exitCode=125 Jan 29 16:17:21 crc kubenswrapper[5008]: I0129 16:17:21.520546 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nvrh2/crc-debug-wrjnm" event={"ID":"8077b692-59d3-4065-8632-745ffcd783af","Type":"ContainerDied","Data":"12f78f704b07eccfa0b429f65cb28772b19c0b10e53b2bfdda418b422bc2f249"} Jan 29 16:17:22 crc kubenswrapper[5008]: I0129 16:17:22.645301 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nvrh2/crc-debug-wrjnm" Jan 29 16:17:22 crc kubenswrapper[5008]: I0129 16:17:22.682762 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-nvrh2/crc-debug-wrjnm"] Jan 29 16:17:22 crc kubenswrapper[5008]: I0129 16:17:22.693125 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-nvrh2/crc-debug-wrjnm"] Jan 29 16:17:22 crc kubenswrapper[5008]: I0129 16:17:22.726638 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvl6w\" (UniqueName: \"kubernetes.io/projected/8077b692-59d3-4065-8632-745ffcd783af-kube-api-access-cvl6w\") pod \"8077b692-59d3-4065-8632-745ffcd783af\" (UID: \"8077b692-59d3-4065-8632-745ffcd783af\") " Jan 29 16:17:22 crc kubenswrapper[5008]: I0129 16:17:22.726697 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8077b692-59d3-4065-8632-745ffcd783af-host\") pod \"8077b692-59d3-4065-8632-745ffcd783af\" (UID: \"8077b692-59d3-4065-8632-745ffcd783af\") " Jan 29 16:17:22 crc kubenswrapper[5008]: I0129 16:17:22.727523 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8077b692-59d3-4065-8632-745ffcd783af-host" (OuterVolumeSpecName: "host") pod "8077b692-59d3-4065-8632-745ffcd783af" (UID: "8077b692-59d3-4065-8632-745ffcd783af"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:17:22 crc kubenswrapper[5008]: I0129 16:17:22.745716 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8077b692-59d3-4065-8632-745ffcd783af-kube-api-access-cvl6w" (OuterVolumeSpecName: "kube-api-access-cvl6w") pod "8077b692-59d3-4065-8632-745ffcd783af" (UID: "8077b692-59d3-4065-8632-745ffcd783af"). InnerVolumeSpecName "kube-api-access-cvl6w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:17:22 crc kubenswrapper[5008]: I0129 16:17:22.829390 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvl6w\" (UniqueName: \"kubernetes.io/projected/8077b692-59d3-4065-8632-745ffcd783af-kube-api-access-cvl6w\") on node \"crc\" DevicePath \"\"" Jan 29 16:17:22 crc kubenswrapper[5008]: I0129 16:17:22.829434 5008 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8077b692-59d3-4065-8632-745ffcd783af-host\") on node \"crc\" DevicePath \"\"" Jan 29 16:17:23 crc kubenswrapper[5008]: I0129 16:17:23.334353 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8077b692-59d3-4065-8632-745ffcd783af" path="/var/lib/kubelet/pods/8077b692-59d3-4065-8632-745ffcd783af/volumes" Jan 29 16:17:23 crc kubenswrapper[5008]: I0129 16:17:23.538587 5008 scope.go:117] "RemoveContainer" containerID="12f78f704b07eccfa0b429f65cb28772b19c0b10e53b2bfdda418b422bc2f249" Jan 29 16:17:23 crc kubenswrapper[5008]: I0129 16:17:23.538618 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nvrh2/crc-debug-wrjnm" Jan 29 16:17:31 crc kubenswrapper[5008]: E0129 16:17:31.326127 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" Jan 29 16:17:32 crc kubenswrapper[5008]: E0129 16:17:32.327026 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:17:32 crc kubenswrapper[5008]: E0129 16:17:32.327079 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:17:34 crc kubenswrapper[5008]: E0129 16:17:34.326130 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:17:34 crc kubenswrapper[5008]: E0129 16:17:34.326951 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:17:46 crc kubenswrapper[5008]: E0129 16:17:46.326407 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:17:46 crc kubenswrapper[5008]: 
E0129 16:17:46.326447 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:17:46 crc kubenswrapper[5008]: E0129 16:17:46.326694 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" Jan 29 16:17:47 crc kubenswrapper[5008]: E0129 16:17:47.330693 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:17:48 crc kubenswrapper[5008]: E0129 16:17:48.325322 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:17:58 crc kubenswrapper[5008]: E0129 16:17:58.326447 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:17:59 crc kubenswrapper[5008]: E0129 16:17:59.325868 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:18:01 crc kubenswrapper[5008]: E0129 16:18:01.326418 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" Jan 29 16:18:01 crc kubenswrapper[5008]: E0129 16:18:01.327089 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:18:02 crc kubenswrapper[5008]: E0129 16:18:02.326252 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:18:06 crc kubenswrapper[5008]: I0129 16:18:06.614113 5008 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-7f9c9f8766-4lf97_ce981b8e-ff53-48ad-b44e-b150c0b1b80f/barbican-api/0.log" Jan 29 16:18:06 crc kubenswrapper[5008]: I0129 16:18:06.731935 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7f9c9f8766-4lf97_ce981b8e-ff53-48ad-b44e-b150c0b1b80f/barbican-api-log/0.log" Jan 29 16:18:06 crc kubenswrapper[5008]: I0129 16:18:06.838569 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-d5688bfcd-94rkm_24c4cc25-9e50-4601-bac2-552e1aded799/barbican-keystone-listener/0.log" Jan 29 16:18:06 crc kubenswrapper[5008]: I0129 16:18:06.930364 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-d5688bfcd-94rkm_24c4cc25-9e50-4601-bac2-552e1aded799/barbican-keystone-listener-log/0.log" Jan 29 16:18:07 crc kubenswrapper[5008]: I0129 16:18:07.008702 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5c46c758ff-5p4jl_f77f54f0-02b9-4082-8a76-dc78a9b7d08c/barbican-worker/0.log" Jan 29 16:18:07 crc kubenswrapper[5008]: I0129 16:18:07.066420 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5c46c758ff-5p4jl_f77f54f0-02b9-4082-8a76-dc78a9b7d08c/barbican-worker-log/0.log" Jan 29 16:18:07 crc kubenswrapper[5008]: I0129 16:18:07.220157 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d40740f9-e8d8-4f46-b8b0-d913a6c33210/ceilometer-central-agent/0.log" Jan 29 16:18:07 crc kubenswrapper[5008]: I0129 16:18:07.286670 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d40740f9-e8d8-4f46-b8b0-d913a6c33210/ceilometer-notification-agent/0.log" Jan 29 16:18:07 crc kubenswrapper[5008]: I0129 16:18:07.360047 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d40740f9-e8d8-4f46-b8b0-d913a6c33210/sg-core/0.log" Jan 29 16:18:07 crc kubenswrapper[5008]: I0129 16:18:07.498352 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_2f60d298-c33b-44b3-a99c-a0e75a321a80/cinder-api/0.log" Jan 29 16:18:07 crc kubenswrapper[5008]: I0129 16:18:07.502960 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_2f60d298-c33b-44b3-a99c-a0e75a321a80/cinder-api-log/0.log" Jan 29 16:18:07 crc kubenswrapper[5008]: I0129 16:18:07.646136 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2c4e7961-5802-47c7-becf-75dd01d6e7d1/cinder-scheduler/0.log" Jan 29 16:18:07 crc kubenswrapper[5008]: I0129 16:18:07.716547 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2c4e7961-5802-47c7-becf-75dd01d6e7d1/probe/0.log" Jan 29 16:18:07 crc kubenswrapper[5008]: I0129 16:18:07.796690 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cd5cbd7b9-ttnd7_ffdf9dd1-5826-4e41-90ba-770e9ae42cc2/init/0.log" Jan 29 16:18:07 crc kubenswrapper[5008]: I0129 16:18:07.996940 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cd5cbd7b9-ttnd7_ffdf9dd1-5826-4e41-90ba-770e9ae42cc2/init/0.log" Jan 29 16:18:08 crc kubenswrapper[5008]: I0129 16:18:08.073137 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cd5cbd7b9-ttnd7_ffdf9dd1-5826-4e41-90ba-770e9ae42cc2/dnsmasq-dns/0.log" Jan 29 16:18:08 crc kubenswrapper[5008]: I0129 16:18:08.183407 5008 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_b210097f-985c-4014-a76e-b430ef390fce/glance-httpd/0.log" Jan 29 16:18:08 crc kubenswrapper[5008]: I0129 16:18:08.300908 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_b210097f-985c-4014-a76e-b430ef390fce/glance-log/0.log" Jan 29 16:18:08 crc kubenswrapper[5008]: I0129 16:18:08.368416 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d30face9-2636-4cb7-8e84-8558b7b40df4/glance-httpd/0.log" Jan 29 16:18:08 crc kubenswrapper[5008]: I0129 16:18:08.369558 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d30face9-2636-4cb7-8e84-8558b7b40df4/glance-log/0.log" Jan 29 16:18:08 crc kubenswrapper[5008]: I0129 16:18:08.683463 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-bf5f5fc4b-t9vk7_fc599e48-62d0-4908-b4ed-cd3f13094665/horizon/0.log" Jan 29 16:18:08 crc kubenswrapper[5008]: I0129 16:18:08.842324 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-779d6696cc-ltp9g_4732d1d7-c3d2-4f17-bf74-d92f350a3e2b/keystone-api/0.log" Jan 29 16:18:08 crc kubenswrapper[5008]: I0129 16:18:08.874759 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-bf5f5fc4b-t9vk7_fc599e48-62d0-4908-b4ed-cd3f13094665/horizon-log/0.log" Jan 29 16:18:08 crc kubenswrapper[5008]: I0129 16:18:08.880798 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29495041-5xjnv_3b2cbc69-268a-4c30-b9c0-d1352f380259/keystone-cron/0.log" Jan 29 16:18:09 crc kubenswrapper[5008]: I0129 16:18:09.094585 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_2691fca5-fe1e-4796-bf43-7135e9d5a198/kube-state-metrics/0.log" Jan 29 16:18:09 crc kubenswrapper[5008]: I0129 16:18:09.367343 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-98cff5df-8qpcl_6bf14a27-dc0a-430e-affa-a6a28e944947/neutron-httpd/0.log" Jan 29 16:18:09 crc kubenswrapper[5008]: I0129 16:18:09.375971 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-98cff5df-8qpcl_6bf14a27-dc0a-430e-affa-a6a28e944947/neutron-api/0.log" Jan 29 16:18:09 crc kubenswrapper[5008]: I0129 16:18:09.823677 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ffff5fc1-f4be-4fad-bfa8-890ea58d2a00/nova-api-log/0.log" Jan 29 16:18:09 crc kubenswrapper[5008]: I0129 16:18:09.875699 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ffff5fc1-f4be-4fad-bfa8-890ea58d2a00/nova-api-api/0.log" Jan 29 16:18:10 crc kubenswrapper[5008]: I0129 16:18:10.144663 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_fc7804a1-e957-4095-b882-901a403bce11/nova-cell0-conductor-conductor/0.log" Jan 29 16:18:10 crc kubenswrapper[5008]: E0129 16:18:10.329028 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:18:10 crc kubenswrapper[5008]: I0129 16:18:10.446190 5008 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-conductor-0_1a40e352-7353-41e6-8c6e-58b7beca8ab9/nova-cell1-conductor-conductor/0.log" Jan 29 16:18:10 crc kubenswrapper[5008]: I0129 16:18:10.503545 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_21ca19b4-0317-4b08-8dc2-a4295c2fb8e4/nova-cell1-novncproxy-novncproxy/0.log" Jan 29 16:18:10 crc kubenswrapper[5008]: I0129 16:18:10.814302 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a4470533-b658-46fe-8749-f371b22703b2/nova-metadata-log/0.log" Jan 29 16:18:11 crc kubenswrapper[5008]: I0129 16:18:11.194577 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_f6caa062-78b8-42ad-a655-6828f63a7e8f/nova-scheduler-scheduler/0.log" Jan 29 16:18:11 crc kubenswrapper[5008]: I0129 16:18:11.216918 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_2c8d6871-1129-4597-8a1e-94006a17448a/mysql-bootstrap/0.log" Jan 29 16:18:11 crc kubenswrapper[5008]: E0129 16:18:11.326194 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:18:11 crc kubenswrapper[5008]: I0129 16:18:11.414256 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_2c8d6871-1129-4597-8a1e-94006a17448a/mysql-bootstrap/0.log" Jan 29 16:18:11 crc kubenswrapper[5008]: I0129 16:18:11.448592 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_2c8d6871-1129-4597-8a1e-94006a17448a/galera/0.log" Jan 29 16:18:11 crc kubenswrapper[5008]: I0129 16:18:11.596457 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a4470533-b658-46fe-8749-f371b22703b2/nova-metadata-metadata/0.log" Jan 29 16:18:11 crc kubenswrapper[5008]: I0129 16:18:11.623369 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a2958b99-a5fe-447a-93cc-64bade998854/mysql-bootstrap/0.log" Jan 29 16:18:11 crc kubenswrapper[5008]: I0129 16:18:11.961389 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a2958b99-a5fe-447a-93cc-64bade998854/mysql-bootstrap/0.log" Jan 29 16:18:12 crc kubenswrapper[5008]: I0129 16:18:12.022636 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_3b26c725-8ee1-4144-baa0-a4a85bb7e1d2/openstackclient/0.log" Jan 29 16:18:12 crc kubenswrapper[5008]: I0129 16:18:12.040979 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a2958b99-a5fe-447a-93cc-64bade998854/galera/0.log" Jan 29 16:18:12 crc kubenswrapper[5008]: I0129 16:18:12.289138 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-bw9wr_0dd702c8-269b-4fb6-a3a7-03adf93d916a/ovn-controller/0.log" Jan 29 16:18:12 crc kubenswrapper[5008]: E0129 16:18:12.326154 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:18:12 crc kubenswrapper[5008]: I0129 16:18:12.344004 
5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-qkf4v_90c13843-e314-4465-af68-367fc8d59731/openstack-network-exporter/0.log" Jan 29 16:18:12 crc kubenswrapper[5008]: I0129 16:18:12.509091 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-k5zwb_fb07a603-1696-4378-8d99-382d5bc152da/ovsdb-server-init/0.log" Jan 29 16:18:12 crc kubenswrapper[5008]: I0129 16:18:12.775160 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-k5zwb_fb07a603-1696-4378-8d99-382d5bc152da/ovsdb-server-init/0.log" Jan 29 16:18:12 crc kubenswrapper[5008]: I0129 16:18:12.791569 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-k5zwb_fb07a603-1696-4378-8d99-382d5bc152da/ovs-vswitchd/0.log" Jan 29 16:18:12 crc kubenswrapper[5008]: I0129 16:18:12.853753 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-k5zwb_fb07a603-1696-4378-8d99-382d5bc152da/ovsdb-server/0.log" Jan 29 16:18:13 crc kubenswrapper[5008]: I0129 16:18:13.029865 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f251affb-8e6d-445d-996c-da5e3fc29de8/openstack-network-exporter/0.log" Jan 29 16:18:13 crc kubenswrapper[5008]: I0129 16:18:13.057775 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f251affb-8e6d-445d-996c-da5e3fc29de8/ovn-northd/0.log" Jan 29 16:18:13 crc kubenswrapper[5008]: I0129 16:18:13.171392 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4d502938-9e22-4a6c-951e-b476cb87ee8f/openstack-network-exporter/0.log" Jan 29 16:18:13 crc kubenswrapper[5008]: I0129 16:18:13.248066 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4d502938-9e22-4a6c-951e-b476cb87ee8f/ovsdbserver-nb/0.log" Jan 29 16:18:13 crc kubenswrapper[5008]: I0129 16:18:13.353242 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106/ovsdbserver-sb/0.log" Jan 29 16:18:13 crc kubenswrapper[5008]: I0129 16:18:13.399234 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ea8d28cd-76d6-4a6e-b6bd-a0e5f0fc2106/openstack-network-exporter/0.log" Jan 29 16:18:13 crc kubenswrapper[5008]: I0129 16:18:13.643701 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-55d9fbf66-r5kj8_85024049-9e4b-4814-a617-cd17614f2a80/placement-api/0.log" Jan 29 16:18:13 crc kubenswrapper[5008]: I0129 16:18:13.660759 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-55d9fbf66-r5kj8_85024049-9e4b-4814-a617-cd17614f2a80/placement-log/0.log" Jan 29 16:18:13 crc kubenswrapper[5008]: I0129 16:18:13.768416 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4dcd0990-beb1-445a-b387-b2b78c1a39d2/setup-container/0.log" Jan 29 16:18:13 crc kubenswrapper[5008]: I0129 16:18:13.942578 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4dcd0990-beb1-445a-b387-b2b78c1a39d2/setup-container/0.log" Jan 29 16:18:14 crc kubenswrapper[5008]: I0129 16:18:14.041447 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4dcd0990-beb1-445a-b387-b2b78c1a39d2/rabbitmq/0.log" Jan 29 16:18:14 crc kubenswrapper[5008]: I0129 16:18:14.052534 5008 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_8c8683a3-18f6-4242-9991-b542aed9143b/setup-container/0.log" Jan 29 16:18:14 crc kubenswrapper[5008]: I0129 16:18:14.311814 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_8c8683a3-18f6-4242-9991-b542aed9143b/setup-container/0.log" Jan 29 16:18:14 crc kubenswrapper[5008]: I0129 16:18:14.315746 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_8c8683a3-18f6-4242-9991-b542aed9143b/rabbitmq/0.log" Jan 29 16:18:14 crc kubenswrapper[5008]: E0129 16:18:14.325044 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" Jan 29 16:18:14 crc kubenswrapper[5008]: I0129 16:18:14.414073 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5c6fbdb57f-zvhpz_64c08f63-12a2-4dfb-b96d-0a12e9725021/proxy-httpd/0.log" Jan 29 16:18:14 crc kubenswrapper[5008]: I0129 16:18:14.547717 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5c6fbdb57f-zvhpz_64c08f63-12a2-4dfb-b96d-0a12e9725021/proxy-server/0.log" Jan 29 16:18:14 crc kubenswrapper[5008]: I0129 16:18:14.574441 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-phmts_5b273a50-b2db-40d5-b4b4-6494206c606d/swift-ring-rebalance/0.log" Jan 29 16:18:14 crc kubenswrapper[5008]: I0129 16:18:14.772013 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7d8596d3-fe9a-4e1a-969b-2a40a90e437d/account-auditor/0.log" Jan 29 16:18:14 crc kubenswrapper[5008]: I0129 16:18:14.851326 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7d8596d3-fe9a-4e1a-969b-2a40a90e437d/account-reaper/0.log" Jan 29 16:18:14 crc kubenswrapper[5008]: I0129 16:18:14.867821 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7d8596d3-fe9a-4e1a-969b-2a40a90e437d/account-replicator/0.log" Jan 29 16:18:14 crc kubenswrapper[5008]: I0129 16:18:14.940681 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7d8596d3-fe9a-4e1a-969b-2a40a90e437d/account-server/0.log" Jan 29 16:18:15 crc kubenswrapper[5008]: I0129 16:18:15.013490 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7d8596d3-fe9a-4e1a-969b-2a40a90e437d/container-auditor/0.log" Jan 29 16:18:15 crc kubenswrapper[5008]: I0129 16:18:15.105048 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7d8596d3-fe9a-4e1a-969b-2a40a90e437d/container-server/0.log" Jan 29 16:18:15 crc kubenswrapper[5008]: I0129 16:18:15.109050 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7d8596d3-fe9a-4e1a-969b-2a40a90e437d/container-replicator/0.log" Jan 29 16:18:15 crc kubenswrapper[5008]: I0129 16:18:15.147618 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7d8596d3-fe9a-4e1a-969b-2a40a90e437d/container-updater/0.log" Jan 29 16:18:15 crc kubenswrapper[5008]: I0129 16:18:15.270837 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7d8596d3-fe9a-4e1a-969b-2a40a90e437d/object-auditor/0.log" Jan 29 
16:18:15 crc kubenswrapper[5008]: E0129 16:18:15.326493 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" Jan 29 16:18:15 crc kubenswrapper[5008]: I0129 16:18:15.351470 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7d8596d3-fe9a-4e1a-969b-2a40a90e437d/object-expirer/0.log" Jan 29 16:18:15 crc kubenswrapper[5008]: I0129 16:18:15.374446 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7d8596d3-fe9a-4e1a-969b-2a40a90e437d/object-replicator/0.log" Jan 29 16:18:15 crc kubenswrapper[5008]: I0129 16:18:15.394949 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7d8596d3-fe9a-4e1a-969b-2a40a90e437d/object-server/0.log" Jan 29 16:18:15 crc kubenswrapper[5008]: I0129 16:18:15.466697 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7d8596d3-fe9a-4e1a-969b-2a40a90e437d/object-updater/0.log" Jan 29 16:18:15 crc kubenswrapper[5008]: I0129 16:18:15.589402 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7d8596d3-fe9a-4e1a-969b-2a40a90e437d/swift-recon-cron/0.log" Jan 29 16:18:15 crc kubenswrapper[5008]: I0129 16:18:15.624007 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7d8596d3-fe9a-4e1a-969b-2a40a90e437d/rsync/0.log" Jan 29 16:18:18 crc kubenswrapper[5008]: I0129 16:18:18.520479 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_b37ef43d-23ae-4a9c-af60-e616882400c3/memcached/0.log" Jan 29 16:18:23 crc kubenswrapper[5008]: E0129 16:18:23.332797 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" Jan 29 16:18:24 crc kubenswrapper[5008]: E0129 16:18:24.326178 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:18:24 crc kubenswrapper[5008]: E0129 16:18:24.326217 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:18:27 crc kubenswrapper[5008]: E0129 16:18:27.343360 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" Jan 29 16:18:28 crc kubenswrapper[5008]: I0129 16:18:28.325123 5008 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:18:30 crc 
kubenswrapper[5008]: I0129 16:18:30.063995 5008 generic.go:334] "Generic (PLEG): container finished" podID="2ed48245-be09-46c8-97f9-263179717512" containerID="2a3e039c86c16529ffc1767b999614b707b1d52ce151e11129bd73623bb6bff2" exitCode=0 Jan 29 16:18:30 crc kubenswrapper[5008]: I0129 16:18:30.064643 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qmv7" event={"ID":"2ed48245-be09-46c8-97f9-263179717512","Type":"ContainerDied","Data":"2a3e039c86c16529ffc1767b999614b707b1d52ce151e11129bd73623bb6bff2"} Jan 29 16:18:31 crc kubenswrapper[5008]: I0129 16:18:31.079552 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qmv7" event={"ID":"2ed48245-be09-46c8-97f9-263179717512","Type":"ContainerStarted","Data":"46eb4d3796891c306cbde105e94442d37ac30f507cbbd4c4047d92b51dd2d1d5"} Jan 29 16:18:31 crc kubenswrapper[5008]: I0129 16:18:31.106188 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6qmv7" podStartSLOduration=2.670004418 podStartE2EDuration="11m9.106165272s" podCreationTimestamp="2026-01-29 16:07:22 +0000 UTC" firstStartedPulling="2026-01-29 16:07:24.255197834 +0000 UTC m=+2387.928052071" lastFinishedPulling="2026-01-29 16:18:30.691358688 +0000 UTC m=+3054.364212925" observedRunningTime="2026-01-29 16:18:31.098255371 +0000 UTC m=+3054.771109638" watchObservedRunningTime="2026-01-29 16:18:31.106165272 +0000 UTC m=+3054.779019509" Jan 29 16:18:33 crc kubenswrapper[5008]: I0129 16:18:33.050679 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6qmv7" Jan 29 16:18:33 crc kubenswrapper[5008]: I0129 16:18:33.051038 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6qmv7" Jan 29 16:18:33 crc kubenswrapper[5008]: I0129 16:18:33.102906 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6qmv7" Jan 29 16:18:36 crc kubenswrapper[5008]: E0129 16:18:36.326900 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" Jan 29 16:18:37 crc kubenswrapper[5008]: E0129 16:18:37.332199 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:18:38 crc kubenswrapper[5008]: I0129 16:18:38.900958 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg_dcbfd66c-b06c-432d-b8e8-a222ab00f36c/util/0.log" Jan 29 16:18:39 crc kubenswrapper[5008]: I0129 16:18:39.101635 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg_dcbfd66c-b06c-432d-b8e8-a222ab00f36c/pull/0.log" Jan 29 16:18:39 crc kubenswrapper[5008]: I0129 16:18:39.125665 5008 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg_dcbfd66c-b06c-432d-b8e8-a222ab00f36c/util/0.log" Jan 29 16:18:39 crc kubenswrapper[5008]: I0129 16:18:39.154653 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40740f9-e8d8-4f46-b8b0-d913a6c33210","Type":"ContainerStarted","Data":"9e74ba55685ef91dc5c5fd4f75d0c04e6a02240db3ef22d23b01c38947545bf7"} Jan 29 16:18:39 crc kubenswrapper[5008]: I0129 16:18:39.155103 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 16:18:39 crc kubenswrapper[5008]: I0129 16:18:39.172856 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg_dcbfd66c-b06c-432d-b8e8-a222ab00f36c/pull/0.log" Jan 29 16:18:39 crc kubenswrapper[5008]: I0129 16:18:39.181681 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.772613144 podStartE2EDuration="26m8.181661737s" podCreationTimestamp="2026-01-29 15:52:31 +0000 UTC" firstStartedPulling="2026-01-29 15:52:32.537977257 +0000 UTC m=+1496.210831494" lastFinishedPulling="2026-01-29 16:18:37.94702585 +0000 UTC m=+3061.619880087" observedRunningTime="2026-01-29 16:18:39.176070531 +0000 UTC m=+3062.848924768" watchObservedRunningTime="2026-01-29 16:18:39.181661737 +0000 UTC m=+3062.854515974" Jan 29 16:18:39 crc kubenswrapper[5008]: I0129 16:18:39.372370 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg_dcbfd66c-b06c-432d-b8e8-a222ab00f36c/util/0.log" Jan 29 16:18:39 crc kubenswrapper[5008]: I0129 16:18:39.379374 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg_dcbfd66c-b06c-432d-b8e8-a222ab00f36c/pull/0.log" Jan 29 16:18:39 crc kubenswrapper[5008]: I0129 16:18:39.381985 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b72n4rxg_dcbfd66c-b06c-432d-b8e8-a222ab00f36c/extract/0.log" Jan 29 16:18:39 crc kubenswrapper[5008]: I0129 16:18:39.662678 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-hh7sg_68468eb9-9e76-4f2f-9aba-cc3198e0a241/manager/0.log" Jan 29 16:18:39 crc kubenswrapper[5008]: I0129 16:18:39.666934 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-4zrsr_6e775178-095e-451d-bded-b83f229c4231/manager/0.log" Jan 29 16:18:39 crc kubenswrapper[5008]: I0129 16:18:39.842574 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-n4xtj_7a610d2e-cb71-4995-a0e8-f6dc26f7664a/manager/0.log" Jan 29 16:18:39 crc kubenswrapper[5008]: I0129 16:18:39.946260 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-s4fq5_94a4547d-0c92-41e4-8ca7-64e21df1708e/manager/0.log" Jan 29 16:18:40 crc kubenswrapper[5008]: I0129 16:18:40.255152 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-9sf7f_b46e3eea-2330-4b3f-b45d-34ae38a0dde9/manager/0.log" Jan 29 16:18:40 crc 
kubenswrapper[5008]: I0129 16:18:40.421968 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-qs9wh_cae67616-1145-4057-b304-08a322e78d9d/manager/0.log" Jan 29 16:18:40 crc kubenswrapper[5008]: I0129 16:18:40.658281 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-ncxxj_6196a4fd-8576-412f-9140-cf61b98444a4/manager/0.log" Jan 29 16:18:40 crc kubenswrapper[5008]: I0129 16:18:40.969723 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-q7khh_e57e9a97-d32e-4464-b12c-ba44a4643ada/manager/0.log" Jan 29 16:18:40 crc kubenswrapper[5008]: I0129 16:18:40.983524 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-zvcs5_4ff89cd9-951e-4907-b60c-a1a1c08007a4/manager/0.log" Jan 29 16:18:41 crc kubenswrapper[5008]: E0129 16:18:41.326521 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" Jan 29 16:18:41 crc kubenswrapper[5008]: I0129 16:18:41.426716 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-qhwnb_e76346a9-7ba5-4178-82b7-da9f0c337c08/manager/0.log" Jan 29 16:18:41 crc kubenswrapper[5008]: I0129 16:18:41.526724 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-bjjwz_d39876a5-4ca3-44e2-a4c5-c6541c2ec812/manager/0.log" Jan 29 16:18:41 crc kubenswrapper[5008]: I0129 16:18:41.572736 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-44qcp_14020423-5911-4b69-8889-b12267c9bbf9/manager/0.log" Jan 29 16:18:41 crc kubenswrapper[5008]: I0129 16:18:41.662021 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-klqvj_27a92a88-ee29-47fd-b4cf-5e3232ce7573/manager/0.log" Jan 29 16:18:41 crc kubenswrapper[5008]: I0129 16:18:41.746704 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-zbddd_4dc123ee-b76c-46a7-9aea-76457232036b/manager/0.log" Jan 29 16:18:41 crc kubenswrapper[5008]: I0129 16:18:41.875801 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dxkdxv_9f5d1ef8-a9b5-428a-b441-b7d763dbd102/manager/0.log" Jan 29 16:18:42 crc kubenswrapper[5008]: I0129 16:18:42.215862 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6d9fb954d-qlkhn_9edb96c4-66c6-464b-8dd3-089d6be05a60/operator/0.log" Jan 29 16:18:42 crc kubenswrapper[5008]: I0129 16:18:42.297880 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-lv8km_cdce8b7e-15b6-41ae-89f3-fd69472b9800/registry-server/0.log" Jan 29 16:18:42 crc kubenswrapper[5008]: I0129 16:18:42.879730 5008 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vtv85_1a373ec7-8da3-4b3e-a08a-e5e8b8e5a2d1/operator/0.log" Jan 29 16:18:42 crc kubenswrapper[5008]: I0129 16:18:42.919609 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-77db58b9dd-srsvv_44442d63-1bbc-4d1c-9e9d-2a9ad59baf59/manager/0.log" Jan 29 16:18:42 crc kubenswrapper[5008]: I0129 16:18:42.929347 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-xjf4m_ce6a1921-bd9b-47c4-8f5f-9443d8e4c08f/manager/0.log" Jan 29 16:18:42 crc kubenswrapper[5008]: I0129 16:18:42.947425 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-qjtzq_cb2d6253-7fa7-41a9-9d0b-002ef590c4db/manager/0.log" Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.110280 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-84h7l_a9dfe223-8569-48bb-8b52-c3fb069208a0/manager/0.log" Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.113048 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6qmv7" Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.175407 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6qmv7"] Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.226864 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6qmv7" podUID="2ed48245-be09-46c8-97f9-263179717512" containerName="registry-server" containerID="cri-o://46eb4d3796891c306cbde105e94442d37ac30f507cbbd4c4047d92b51dd2d1d5" gracePeriod=2 Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.252672 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-bbsft_30b3e5fd-7f41-4ed9-a1de-cb282994ad38/manager/0.log" Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.384695 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-fxz5k_d4fd527b-7108-4f94-b7a9-bb0b358b8c3c/manager/0.log" Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.443203 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-dwhc5_a2163508-5800-4d97-b8d4-1f3815764822/manager/0.log" Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.715513 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6qmv7" Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.770186 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed48245-be09-46c8-97f9-263179717512-utilities\") pod \"2ed48245-be09-46c8-97f9-263179717512\" (UID: \"2ed48245-be09-46c8-97f9-263179717512\") " Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.770255 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed48245-be09-46c8-97f9-263179717512-catalog-content\") pod \"2ed48245-be09-46c8-97f9-263179717512\" (UID: \"2ed48245-be09-46c8-97f9-263179717512\") " Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.770398 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29db7\" (UniqueName: \"kubernetes.io/projected/2ed48245-be09-46c8-97f9-263179717512-kube-api-access-29db7\") pod \"2ed48245-be09-46c8-97f9-263179717512\" (UID: \"2ed48245-be09-46c8-97f9-263179717512\") " Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.770891 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ed48245-be09-46c8-97f9-263179717512-utilities" (OuterVolumeSpecName: "utilities") pod "2ed48245-be09-46c8-97f9-263179717512" (UID: "2ed48245-be09-46c8-97f9-263179717512"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.771126 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed48245-be09-46c8-97f9-263179717512-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.776967 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ed48245-be09-46c8-97f9-263179717512-kube-api-access-29db7" (OuterVolumeSpecName: "kube-api-access-29db7") pod "2ed48245-be09-46c8-97f9-263179717512" (UID: "2ed48245-be09-46c8-97f9-263179717512"). InnerVolumeSpecName "kube-api-access-29db7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.832437 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ed48245-be09-46c8-97f9-263179717512-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ed48245-be09-46c8-97f9-263179717512" (UID: "2ed48245-be09-46c8-97f9-263179717512"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.873558 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29db7\" (UniqueName: \"kubernetes.io/projected/2ed48245-be09-46c8-97f9-263179717512-kube-api-access-29db7\") on node \"crc\" DevicePath \"\"" Jan 29 16:18:43 crc kubenswrapper[5008]: I0129 16:18:43.873598 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed48245-be09-46c8-97f9-263179717512-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.238931 5008 generic.go:334] "Generic (PLEG): container finished" podID="2ed48245-be09-46c8-97f9-263179717512" containerID="46eb4d3796891c306cbde105e94442d37ac30f507cbbd4c4047d92b51dd2d1d5" exitCode=0 Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.238998 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qmv7" event={"ID":"2ed48245-be09-46c8-97f9-263179717512","Type":"ContainerDied","Data":"46eb4d3796891c306cbde105e94442d37ac30f507cbbd4c4047d92b51dd2d1d5"} Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.239323 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qmv7" event={"ID":"2ed48245-be09-46c8-97f9-263179717512","Type":"ContainerDied","Data":"4e824484315a6e30506a2f7c7fb618d142a68d99bd3176c0a282d8bafa44de26"} Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.239345 5008 scope.go:117] "RemoveContainer" containerID="46eb4d3796891c306cbde105e94442d37ac30f507cbbd4c4047d92b51dd2d1d5" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.239061 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6qmv7" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.272026 5008 scope.go:117] "RemoveContainer" containerID="2a3e039c86c16529ffc1767b999614b707b1d52ce151e11129bd73623bb6bff2" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.275577 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6qmv7"] Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.283902 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6qmv7"] Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.324324 5008 scope.go:117] "RemoveContainer" containerID="150149c6a5ab91f06872737ef57f87254f939be1476ab033203541676c958766" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.360031 5008 scope.go:117] "RemoveContainer" containerID="46eb4d3796891c306cbde105e94442d37ac30f507cbbd4c4047d92b51dd2d1d5" Jan 29 16:18:44 crc kubenswrapper[5008]: E0129 16:18:44.360493 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46eb4d3796891c306cbde105e94442d37ac30f507cbbd4c4047d92b51dd2d1d5\": container with ID starting with 46eb4d3796891c306cbde105e94442d37ac30f507cbbd4c4047d92b51dd2d1d5 not found: ID does not exist" containerID="46eb4d3796891c306cbde105e94442d37ac30f507cbbd4c4047d92b51dd2d1d5" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.360534 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46eb4d3796891c306cbde105e94442d37ac30f507cbbd4c4047d92b51dd2d1d5"} err="failed to get container status \"46eb4d3796891c306cbde105e94442d37ac30f507cbbd4c4047d92b51dd2d1d5\": rpc error: code = NotFound desc = could not find container \"46eb4d3796891c306cbde105e94442d37ac30f507cbbd4c4047d92b51dd2d1d5\": container with ID starting with 46eb4d3796891c306cbde105e94442d37ac30f507cbbd4c4047d92b51dd2d1d5 not found: ID does not exist" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.360564 5008 scope.go:117] "RemoveContainer" containerID="2a3e039c86c16529ffc1767b999614b707b1d52ce151e11129bd73623bb6bff2" Jan 29 16:18:44 crc kubenswrapper[5008]: E0129 16:18:44.361751 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a3e039c86c16529ffc1767b999614b707b1d52ce151e11129bd73623bb6bff2\": container with ID starting with 2a3e039c86c16529ffc1767b999614b707b1d52ce151e11129bd73623bb6bff2 not found: ID does not exist" containerID="2a3e039c86c16529ffc1767b999614b707b1d52ce151e11129bd73623bb6bff2" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.361783 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a3e039c86c16529ffc1767b999614b707b1d52ce151e11129bd73623bb6bff2"} err="failed to get container status \"2a3e039c86c16529ffc1767b999614b707b1d52ce151e11129bd73623bb6bff2\": rpc error: code = NotFound desc = could not find container \"2a3e039c86c16529ffc1767b999614b707b1d52ce151e11129bd73623bb6bff2\": container with ID starting with 2a3e039c86c16529ffc1767b999614b707b1d52ce151e11129bd73623bb6bff2 not found: ID does not exist" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.361817 5008 scope.go:117] "RemoveContainer" containerID="150149c6a5ab91f06872737ef57f87254f939be1476ab033203541676c958766" Jan 29 16:18:44 crc kubenswrapper[5008]: E0129 16:18:44.362137 5008 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"150149c6a5ab91f06872737ef57f87254f939be1476ab033203541676c958766\": container with ID starting with 150149c6a5ab91f06872737ef57f87254f939be1476ab033203541676c958766 not found: ID does not exist" containerID="150149c6a5ab91f06872737ef57f87254f939be1476ab033203541676c958766" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.362158 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"150149c6a5ab91f06872737ef57f87254f939be1476ab033203541676c958766"} err="failed to get container status \"150149c6a5ab91f06872737ef57f87254f939be1476ab033203541676c958766\": rpc error: code = NotFound desc = could not find container \"150149c6a5ab91f06872737ef57f87254f939be1476ab033203541676c958766\": container with ID starting with 150149c6a5ab91f06872737ef57f87254f939be1476ab033203541676c958766 not found: ID does not exist" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.769715 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m2ch2"] Jan 29 16:18:44 crc kubenswrapper[5008]: E0129 16:18:44.770166 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed48245-be09-46c8-97f9-263179717512" containerName="extract-utilities" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.770198 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ed48245-be09-46c8-97f9-263179717512" containerName="extract-utilities" Jan 29 16:18:44 crc kubenswrapper[5008]: E0129 16:18:44.770215 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed48245-be09-46c8-97f9-263179717512" containerName="extract-content" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.770222 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ed48245-be09-46c8-97f9-263179717512" containerName="extract-content" Jan 29 16:18:44 crc kubenswrapper[5008]: E0129 16:18:44.770233 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8077b692-59d3-4065-8632-745ffcd783af" containerName="container-00" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.770240 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="8077b692-59d3-4065-8632-745ffcd783af" containerName="container-00" Jan 29 16:18:44 crc kubenswrapper[5008]: E0129 16:18:44.770267 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed48245-be09-46c8-97f9-263179717512" containerName="registry-server" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.770272 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ed48245-be09-46c8-97f9-263179717512" containerName="registry-server" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.770437 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ed48245-be09-46c8-97f9-263179717512" containerName="registry-server" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.770459 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="8077b692-59d3-4065-8632-745ffcd783af" containerName="container-00" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.771828 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m2ch2" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.789296 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-catalog-content\") pod \"certified-operators-m2ch2\" (UID: \"8288f5b4-361c-4f53-bcc9-5ec9a42464cb\") " pod="openshift-marketplace/certified-operators-m2ch2" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.789392 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmvrh\" (UniqueName: \"kubernetes.io/projected/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-kube-api-access-qmvrh\") pod \"certified-operators-m2ch2\" (UID: \"8288f5b4-361c-4f53-bcc9-5ec9a42464cb\") " pod="openshift-marketplace/certified-operators-m2ch2" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.789456 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-utilities\") pod \"certified-operators-m2ch2\" (UID: \"8288f5b4-361c-4f53-bcc9-5ec9a42464cb\") " pod="openshift-marketplace/certified-operators-m2ch2" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.791571 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m2ch2"] Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.891865 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmvrh\" (UniqueName: \"kubernetes.io/projected/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-kube-api-access-qmvrh\") pod \"certified-operators-m2ch2\" (UID: \"8288f5b4-361c-4f53-bcc9-5ec9a42464cb\") " pod="openshift-marketplace/certified-operators-m2ch2" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.892663 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-utilities\") pod \"certified-operators-m2ch2\" (UID: \"8288f5b4-361c-4f53-bcc9-5ec9a42464cb\") " pod="openshift-marketplace/certified-operators-m2ch2" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.893049 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-catalog-content\") pod \"certified-operators-m2ch2\" (UID: \"8288f5b4-361c-4f53-bcc9-5ec9a42464cb\") " pod="openshift-marketplace/certified-operators-m2ch2" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.893442 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-utilities\") pod \"certified-operators-m2ch2\" (UID: \"8288f5b4-361c-4f53-bcc9-5ec9a42464cb\") " pod="openshift-marketplace/certified-operators-m2ch2" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.893647 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-catalog-content\") pod \"certified-operators-m2ch2\" (UID: \"8288f5b4-361c-4f53-bcc9-5ec9a42464cb\") " pod="openshift-marketplace/certified-operators-m2ch2" Jan 29 16:18:44 crc kubenswrapper[5008]: I0129 16:18:44.911821 5008 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qmvrh\" (UniqueName: \"kubernetes.io/projected/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-kube-api-access-qmvrh\") pod \"certified-operators-m2ch2\" (UID: \"8288f5b4-361c-4f53-bcc9-5ec9a42464cb\") " pod="openshift-marketplace/certified-operators-m2ch2" Jan 29 16:18:45 crc kubenswrapper[5008]: I0129 16:18:45.092516 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m2ch2" Jan 29 16:18:45 crc kubenswrapper[5008]: I0129 16:18:45.337650 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ed48245-be09-46c8-97f9-263179717512" path="/var/lib/kubelet/pods/2ed48245-be09-46c8-97f9-263179717512/volumes" Jan 29 16:18:45 crc kubenswrapper[5008]: I0129 16:18:45.503515 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m2ch2"] Jan 29 16:18:46 crc kubenswrapper[5008]: I0129 16:18:46.257066 5008 generic.go:334] "Generic (PLEG): container finished" podID="8288f5b4-361c-4f53-bcc9-5ec9a42464cb" containerID="6e02cbc77c685b26cd87795bf1ad1154836ba9023d50cdd82fe7d6cbb5bda03f" exitCode=0 Jan 29 16:18:46 crc kubenswrapper[5008]: I0129 16:18:46.257135 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m2ch2" event={"ID":"8288f5b4-361c-4f53-bcc9-5ec9a42464cb","Type":"ContainerDied","Data":"6e02cbc77c685b26cd87795bf1ad1154836ba9023d50cdd82fe7d6cbb5bda03f"} Jan 29 16:18:46 crc kubenswrapper[5008]: I0129 16:18:46.257407 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m2ch2" event={"ID":"8288f5b4-361c-4f53-bcc9-5ec9a42464cb","Type":"ContainerStarted","Data":"58ba521f62d36bc6f1b5a187281d524b755d3e2cc08d3d128a1d342bd7761433"} Jan 29 16:18:48 crc kubenswrapper[5008]: I0129 16:18:48.276070 5008 generic.go:334] "Generic (PLEG): container finished" podID="8288f5b4-361c-4f53-bcc9-5ec9a42464cb" containerID="b38e303bac84ac6e6b73c3618d83add378ddc0defa725b1feade55c521510803" exitCode=0 Jan 29 16:18:48 crc kubenswrapper[5008]: I0129 16:18:48.276187 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m2ch2" event={"ID":"8288f5b4-361c-4f53-bcc9-5ec9a42464cb","Type":"ContainerDied","Data":"b38e303bac84ac6e6b73c3618d83add378ddc0defa725b1feade55c521510803"} Jan 29 16:18:49 crc kubenswrapper[5008]: I0129 16:18:49.285105 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m2ch2" event={"ID":"8288f5b4-361c-4f53-bcc9-5ec9a42464cb","Type":"ContainerStarted","Data":"f80b22f32e243ce05b9e6f30f2f45f4db27f539d89de49d8c81622c366233bca"} Jan 29 16:18:49 crc kubenswrapper[5008]: I0129 16:18:49.311123 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m2ch2" podStartSLOduration=2.839254312 podStartE2EDuration="5.311098956s" podCreationTimestamp="2026-01-29 16:18:44 +0000 UTC" firstStartedPulling="2026-01-29 16:18:46.258911836 +0000 UTC m=+3069.931766073" lastFinishedPulling="2026-01-29 16:18:48.73075649 +0000 UTC m=+3072.403610717" observedRunningTime="2026-01-29 16:18:49.302030685 +0000 UTC m=+3072.974884942" watchObservedRunningTime="2026-01-29 16:18:49.311098956 +0000 UTC m=+3072.983953213" Jan 29 16:18:50 crc kubenswrapper[5008]: I0129 16:18:50.297476 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lmvr" 
event={"ID":"0cf4cf5b-529f-49a9-900c-a94b840568d8","Type":"ContainerStarted","Data":"55a1073255e02bc66c4374c97c2012312d817a5a770f8d723aa392a76782c498"} Jan 29 16:18:51 crc kubenswrapper[5008]: I0129 16:18:51.306274 5008 generic.go:334] "Generic (PLEG): container finished" podID="0cf4cf5b-529f-49a9-900c-a94b840568d8" containerID="55a1073255e02bc66c4374c97c2012312d817a5a770f8d723aa392a76782c498" exitCode=0 Jan 29 16:18:51 crc kubenswrapper[5008]: I0129 16:18:51.306365 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lmvr" event={"ID":"0cf4cf5b-529f-49a9-900c-a94b840568d8","Type":"ContainerDied","Data":"55a1073255e02bc66c4374c97c2012312d817a5a770f8d723aa392a76782c498"} Jan 29 16:18:52 crc kubenswrapper[5008]: I0129 16:18:52.318376 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lmvr" event={"ID":"0cf4cf5b-529f-49a9-900c-a94b840568d8","Type":"ContainerStarted","Data":"e682772386112fdf3c4c07b2f814297c30f02af78e864a2a8f09ea78d9aef32d"} Jan 29 16:18:52 crc kubenswrapper[5008]: E0129 16:18:52.325769 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:18:52 crc kubenswrapper[5008]: I0129 16:18:52.343279 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9lmvr" podStartSLOduration=2.117183881 podStartE2EDuration="10m57.343256969s" podCreationTimestamp="2026-01-29 16:07:55 +0000 UTC" firstStartedPulling="2026-01-29 16:07:56.537162164 +0000 UTC m=+2420.210016401" lastFinishedPulling="2026-01-29 16:18:51.763235252 +0000 UTC m=+3075.436089489" observedRunningTime="2026-01-29 16:18:52.34000595 +0000 UTC m=+3076.012860207" watchObservedRunningTime="2026-01-29 16:18:52.343256969 +0000 UTC m=+3076.016111206" Jan 29 16:18:55 crc kubenswrapper[5008]: I0129 16:18:55.092927 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m2ch2" Jan 29 16:18:55 crc kubenswrapper[5008]: I0129 16:18:55.093520 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m2ch2" Jan 29 16:18:55 crc kubenswrapper[5008]: I0129 16:18:55.140175 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m2ch2" Jan 29 16:18:55 crc kubenswrapper[5008]: I0129 16:18:55.396004 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m2ch2" Jan 29 16:18:55 crc kubenswrapper[5008]: I0129 16:18:55.615758 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9lmvr" Jan 29 16:18:55 crc kubenswrapper[5008]: I0129 16:18:55.615885 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9lmvr" Jan 29 16:18:55 crc kubenswrapper[5008]: I0129 16:18:55.661952 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9lmvr" Jan 29 16:18:56 crc kubenswrapper[5008]: I0129 16:18:56.154132 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-m2ch2"] Jan 29 16:18:56 crc kubenswrapper[5008]: E0129 16:18:56.326126 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" Jan 29 16:18:56 crc kubenswrapper[5008]: I0129 16:18:56.406155 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9lmvr" Jan 29 16:18:57 crc kubenswrapper[5008]: I0129 16:18:57.357406 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m2ch2" podUID="8288f5b4-361c-4f53-bcc9-5ec9a42464cb" containerName="registry-server" containerID="cri-o://f80b22f32e243ce05b9e6f30f2f45f4db27f539d89de49d8c81622c366233bca" gracePeriod=2 Jan 29 16:18:57 crc kubenswrapper[5008]: I0129 16:18:57.953532 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9lmvr"] Jan 29 16:18:58 crc kubenswrapper[5008]: I0129 16:18:58.364617 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9lmvr" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" containerName="registry-server" containerID="cri-o://e682772386112fdf3c4c07b2f814297c30f02af78e864a2a8f09ea78d9aef32d" gracePeriod=2 Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.065735 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m2ch2" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.175129 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-catalog-content\") pod \"8288f5b4-361c-4f53-bcc9-5ec9a42464cb\" (UID: \"8288f5b4-361c-4f53-bcc9-5ec9a42464cb\") " Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.175175 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-utilities\") pod \"8288f5b4-361c-4f53-bcc9-5ec9a42464cb\" (UID: \"8288f5b4-361c-4f53-bcc9-5ec9a42464cb\") " Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.175210 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmvrh\" (UniqueName: \"kubernetes.io/projected/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-kube-api-access-qmvrh\") pod \"8288f5b4-361c-4f53-bcc9-5ec9a42464cb\" (UID: \"8288f5b4-361c-4f53-bcc9-5ec9a42464cb\") " Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.177402 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-utilities" (OuterVolumeSpecName: "utilities") pod "8288f5b4-361c-4f53-bcc9-5ec9a42464cb" (UID: "8288f5b4-361c-4f53-bcc9-5ec9a42464cb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.183051 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-kube-api-access-qmvrh" (OuterVolumeSpecName: "kube-api-access-qmvrh") pod "8288f5b4-361c-4f53-bcc9-5ec9a42464cb" (UID: "8288f5b4-361c-4f53-bcc9-5ec9a42464cb"). InnerVolumeSpecName "kube-api-access-qmvrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.235412 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8288f5b4-361c-4f53-bcc9-5ec9a42464cb" (UID: "8288f5b4-361c-4f53-bcc9-5ec9a42464cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.277106 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.277161 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.277176 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmvrh\" (UniqueName: \"kubernetes.io/projected/8288f5b4-361c-4f53-bcc9-5ec9a42464cb-kube-api-access-qmvrh\") on node \"crc\" DevicePath \"\"" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.279577 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9lmvr" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.376250 5008 generic.go:334] "Generic (PLEG): container finished" podID="0cf4cf5b-529f-49a9-900c-a94b840568d8" containerID="e682772386112fdf3c4c07b2f814297c30f02af78e864a2a8f09ea78d9aef32d" exitCode=0 Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.376297 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lmvr" event={"ID":"0cf4cf5b-529f-49a9-900c-a94b840568d8","Type":"ContainerDied","Data":"e682772386112fdf3c4c07b2f814297c30f02af78e864a2a8f09ea78d9aef32d"} Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.376354 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9lmvr" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.376691 5008 scope.go:117] "RemoveContainer" containerID="e682772386112fdf3c4c07b2f814297c30f02af78e864a2a8f09ea78d9aef32d" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.377912 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lmvr" event={"ID":"0cf4cf5b-529f-49a9-900c-a94b840568d8","Type":"ContainerDied","Data":"3027721e802c941c68316a40edc4f5165c2ccf1c65e058c580444ac3144242da"} Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.379737 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcgl4\" (UniqueName: \"kubernetes.io/projected/0cf4cf5b-529f-49a9-900c-a94b840568d8-kube-api-access-gcgl4\") pod \"0cf4cf5b-529f-49a9-900c-a94b840568d8\" (UID: \"0cf4cf5b-529f-49a9-900c-a94b840568d8\") " Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.379804 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf4cf5b-529f-49a9-900c-a94b840568d8-catalog-content\") pod \"0cf4cf5b-529f-49a9-900c-a94b840568d8\" (UID: \"0cf4cf5b-529f-49a9-900c-a94b840568d8\") " Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.379955 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf4cf5b-529f-49a9-900c-a94b840568d8-utilities\") pod \"0cf4cf5b-529f-49a9-900c-a94b840568d8\" (UID: \"0cf4cf5b-529f-49a9-900c-a94b840568d8\") " Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.380132 5008 generic.go:334] "Generic (PLEG): container finished" podID="8288f5b4-361c-4f53-bcc9-5ec9a42464cb" containerID="f80b22f32e243ce05b9e6f30f2f45f4db27f539d89de49d8c81622c366233bca" exitCode=0 Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.380166 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m2ch2" event={"ID":"8288f5b4-361c-4f53-bcc9-5ec9a42464cb","Type":"ContainerDied","Data":"f80b22f32e243ce05b9e6f30f2f45f4db27f539d89de49d8c81622c366233bca"} Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.380193 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m2ch2" event={"ID":"8288f5b4-361c-4f53-bcc9-5ec9a42464cb","Type":"ContainerDied","Data":"58ba521f62d36bc6f1b5a187281d524b755d3e2cc08d3d128a1d342bd7761433"} Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.380278 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m2ch2" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.381616 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cf4cf5b-529f-49a9-900c-a94b840568d8-utilities" (OuterVolumeSpecName: "utilities") pod "0cf4cf5b-529f-49a9-900c-a94b840568d8" (UID: "0cf4cf5b-529f-49a9-900c-a94b840568d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.385318 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cf4cf5b-529f-49a9-900c-a94b840568d8-kube-api-access-gcgl4" (OuterVolumeSpecName: "kube-api-access-gcgl4") pod "0cf4cf5b-529f-49a9-900c-a94b840568d8" (UID: "0cf4cf5b-529f-49a9-900c-a94b840568d8"). 
InnerVolumeSpecName "kube-api-access-gcgl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.402398 5008 scope.go:117] "RemoveContainer" containerID="55a1073255e02bc66c4374c97c2012312d817a5a770f8d723aa392a76782c498" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.411301 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m2ch2"] Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.420504 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m2ch2"] Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.430371 5008 scope.go:117] "RemoveContainer" containerID="4afa3ecd1bba399d9d57363e776a21e44e34c2657ea6828efcf74ebcf9e4f108" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.432504 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cf4cf5b-529f-49a9-900c-a94b840568d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0cf4cf5b-529f-49a9-900c-a94b840568d8" (UID: "0cf4cf5b-529f-49a9-900c-a94b840568d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.451931 5008 scope.go:117] "RemoveContainer" containerID="e682772386112fdf3c4c07b2f814297c30f02af78e864a2a8f09ea78d9aef32d" Jan 29 16:18:59 crc kubenswrapper[5008]: E0129 16:18:59.452419 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e682772386112fdf3c4c07b2f814297c30f02af78e864a2a8f09ea78d9aef32d\": container with ID starting with e682772386112fdf3c4c07b2f814297c30f02af78e864a2a8f09ea78d9aef32d not found: ID does not exist" containerID="e682772386112fdf3c4c07b2f814297c30f02af78e864a2a8f09ea78d9aef32d" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.452471 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e682772386112fdf3c4c07b2f814297c30f02af78e864a2a8f09ea78d9aef32d"} err="failed to get container status \"e682772386112fdf3c4c07b2f814297c30f02af78e864a2a8f09ea78d9aef32d\": rpc error: code = NotFound desc = could not find container \"e682772386112fdf3c4c07b2f814297c30f02af78e864a2a8f09ea78d9aef32d\": container with ID starting with e682772386112fdf3c4c07b2f814297c30f02af78e864a2a8f09ea78d9aef32d not found: ID does not exist" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.452507 5008 scope.go:117] "RemoveContainer" containerID="55a1073255e02bc66c4374c97c2012312d817a5a770f8d723aa392a76782c498" Jan 29 16:18:59 crc kubenswrapper[5008]: E0129 16:18:59.453079 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55a1073255e02bc66c4374c97c2012312d817a5a770f8d723aa392a76782c498\": container with ID starting with 55a1073255e02bc66c4374c97c2012312d817a5a770f8d723aa392a76782c498 not found: ID does not exist" containerID="55a1073255e02bc66c4374c97c2012312d817a5a770f8d723aa392a76782c498" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.453106 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55a1073255e02bc66c4374c97c2012312d817a5a770f8d723aa392a76782c498"} err="failed to get container status \"55a1073255e02bc66c4374c97c2012312d817a5a770f8d723aa392a76782c498\": rpc error: code = NotFound desc = could not find container 
\"55a1073255e02bc66c4374c97c2012312d817a5a770f8d723aa392a76782c498\": container with ID starting with 55a1073255e02bc66c4374c97c2012312d817a5a770f8d723aa392a76782c498 not found: ID does not exist" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.453124 5008 scope.go:117] "RemoveContainer" containerID="4afa3ecd1bba399d9d57363e776a21e44e34c2657ea6828efcf74ebcf9e4f108" Jan 29 16:18:59 crc kubenswrapper[5008]: E0129 16:18:59.453427 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4afa3ecd1bba399d9d57363e776a21e44e34c2657ea6828efcf74ebcf9e4f108\": container with ID starting with 4afa3ecd1bba399d9d57363e776a21e44e34c2657ea6828efcf74ebcf9e4f108 not found: ID does not exist" containerID="4afa3ecd1bba399d9d57363e776a21e44e34c2657ea6828efcf74ebcf9e4f108" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.453458 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4afa3ecd1bba399d9d57363e776a21e44e34c2657ea6828efcf74ebcf9e4f108"} err="failed to get container status \"4afa3ecd1bba399d9d57363e776a21e44e34c2657ea6828efcf74ebcf9e4f108\": rpc error: code = NotFound desc = could not find container \"4afa3ecd1bba399d9d57363e776a21e44e34c2657ea6828efcf74ebcf9e4f108\": container with ID starting with 4afa3ecd1bba399d9d57363e776a21e44e34c2657ea6828efcf74ebcf9e4f108 not found: ID does not exist" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.453476 5008 scope.go:117] "RemoveContainer" containerID="f80b22f32e243ce05b9e6f30f2f45f4db27f539d89de49d8c81622c366233bca" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.474589 5008 scope.go:117] "RemoveContainer" containerID="b38e303bac84ac6e6b73c3618d83add378ddc0defa725b1feade55c521510803" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.482426 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcgl4\" (UniqueName: \"kubernetes.io/projected/0cf4cf5b-529f-49a9-900c-a94b840568d8-kube-api-access-gcgl4\") on node \"crc\" DevicePath \"\"" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.482503 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf4cf5b-529f-49a9-900c-a94b840568d8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.482514 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf4cf5b-529f-49a9-900c-a94b840568d8-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.501105 5008 scope.go:117] "RemoveContainer" containerID="6e02cbc77c685b26cd87795bf1ad1154836ba9023d50cdd82fe7d6cbb5bda03f" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.554766 5008 scope.go:117] "RemoveContainer" containerID="f80b22f32e243ce05b9e6f30f2f45f4db27f539d89de49d8c81622c366233bca" Jan 29 16:18:59 crc kubenswrapper[5008]: E0129 16:18:59.555301 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f80b22f32e243ce05b9e6f30f2f45f4db27f539d89de49d8c81622c366233bca\": container with ID starting with f80b22f32e243ce05b9e6f30f2f45f4db27f539d89de49d8c81622c366233bca not found: ID does not exist" containerID="f80b22f32e243ce05b9e6f30f2f45f4db27f539d89de49d8c81622c366233bca" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.555352 5008 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f80b22f32e243ce05b9e6f30f2f45f4db27f539d89de49d8c81622c366233bca"} err="failed to get container status \"f80b22f32e243ce05b9e6f30f2f45f4db27f539d89de49d8c81622c366233bca\": rpc error: code = NotFound desc = could not find container \"f80b22f32e243ce05b9e6f30f2f45f4db27f539d89de49d8c81622c366233bca\": container with ID starting with f80b22f32e243ce05b9e6f30f2f45f4db27f539d89de49d8c81622c366233bca not found: ID does not exist" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.555385 5008 scope.go:117] "RemoveContainer" containerID="b38e303bac84ac6e6b73c3618d83add378ddc0defa725b1feade55c521510803" Jan 29 16:18:59 crc kubenswrapper[5008]: E0129 16:18:59.560060 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b38e303bac84ac6e6b73c3618d83add378ddc0defa725b1feade55c521510803\": container with ID starting with b38e303bac84ac6e6b73c3618d83add378ddc0defa725b1feade55c521510803 not found: ID does not exist" containerID="b38e303bac84ac6e6b73c3618d83add378ddc0defa725b1feade55c521510803" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.560139 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b38e303bac84ac6e6b73c3618d83add378ddc0defa725b1feade55c521510803"} err="failed to get container status \"b38e303bac84ac6e6b73c3618d83add378ddc0defa725b1feade55c521510803\": rpc error: code = NotFound desc = could not find container \"b38e303bac84ac6e6b73c3618d83add378ddc0defa725b1feade55c521510803\": container with ID starting with b38e303bac84ac6e6b73c3618d83add378ddc0defa725b1feade55c521510803 not found: ID does not exist" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.560186 5008 scope.go:117] "RemoveContainer" containerID="6e02cbc77c685b26cd87795bf1ad1154836ba9023d50cdd82fe7d6cbb5bda03f" Jan 29 16:18:59 crc kubenswrapper[5008]: E0129 16:18:59.560922 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e02cbc77c685b26cd87795bf1ad1154836ba9023d50cdd82fe7d6cbb5bda03f\": container with ID starting with 6e02cbc77c685b26cd87795bf1ad1154836ba9023d50cdd82fe7d6cbb5bda03f not found: ID does not exist" containerID="6e02cbc77c685b26cd87795bf1ad1154836ba9023d50cdd82fe7d6cbb5bda03f" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.560989 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e02cbc77c685b26cd87795bf1ad1154836ba9023d50cdd82fe7d6cbb5bda03f"} err="failed to get container status \"6e02cbc77c685b26cd87795bf1ad1154836ba9023d50cdd82fe7d6cbb5bda03f\": rpc error: code = NotFound desc = could not find container \"6e02cbc77c685b26cd87795bf1ad1154836ba9023d50cdd82fe7d6cbb5bda03f\": container with ID starting with 6e02cbc77c685b26cd87795bf1ad1154836ba9023d50cdd82fe7d6cbb5bda03f not found: ID does not exist" Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.708537 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9lmvr"] Jan 29 16:18:59 crc kubenswrapper[5008]: I0129 16:18:59.715824 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9lmvr"] Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.366059 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4knpg"] Jan 29 16:19:00 crc kubenswrapper[5008]: E0129 16:19:00.366742 5008 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" containerName="extract-content" Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.366759 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" containerName="extract-content" Jan 29 16:19:00 crc kubenswrapper[5008]: E0129 16:19:00.366771 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8288f5b4-361c-4f53-bcc9-5ec9a42464cb" containerName="extract-utilities" Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.369661 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="8288f5b4-361c-4f53-bcc9-5ec9a42464cb" containerName="extract-utilities" Jan 29 16:19:00 crc kubenswrapper[5008]: E0129 16:19:00.369774 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" containerName="registry-server" Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.369800 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" containerName="registry-server" Jan 29 16:19:00 crc kubenswrapper[5008]: E0129 16:19:00.369824 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8288f5b4-361c-4f53-bcc9-5ec9a42464cb" containerName="registry-server" Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.369831 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="8288f5b4-361c-4f53-bcc9-5ec9a42464cb" containerName="registry-server" Jan 29 16:19:00 crc kubenswrapper[5008]: E0129 16:19:00.369849 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8288f5b4-361c-4f53-bcc9-5ec9a42464cb" containerName="extract-content" Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.369863 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="8288f5b4-361c-4f53-bcc9-5ec9a42464cb" containerName="extract-content" Jan 29 16:19:00 crc kubenswrapper[5008]: E0129 16:19:00.369916 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" containerName="extract-utilities" Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.369923 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" containerName="extract-utilities" Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.370456 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="8288f5b4-361c-4f53-bcc9-5ec9a42464cb" containerName="registry-server" Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.370472 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" containerName="registry-server" Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.372366 5008 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4knpg" Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.400903 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5409ba7c-5123-492a-a8d6-230022150d55-catalog-content\") pod \"community-operators-4knpg\" (UID: \"5409ba7c-5123-492a-a8d6-230022150d55\") " pod="openshift-marketplace/community-operators-4knpg" Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.401037 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbn7k\" (UniqueName: \"kubernetes.io/projected/5409ba7c-5123-492a-a8d6-230022150d55-kube-api-access-xbn7k\") pod \"community-operators-4knpg\" (UID: \"5409ba7c-5123-492a-a8d6-230022150d55\") " pod="openshift-marketplace/community-operators-4knpg" Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.401105 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5409ba7c-5123-492a-a8d6-230022150d55-utilities\") pod \"community-operators-4knpg\" (UID: \"5409ba7c-5123-492a-a8d6-230022150d55\") " pod="openshift-marketplace/community-operators-4knpg" Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.403936 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4knpg"] Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.503448 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbn7k\" (UniqueName: \"kubernetes.io/projected/5409ba7c-5123-492a-a8d6-230022150d55-kube-api-access-xbn7k\") pod \"community-operators-4knpg\" (UID: \"5409ba7c-5123-492a-a8d6-230022150d55\") " pod="openshift-marketplace/community-operators-4knpg" Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.503546 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5409ba7c-5123-492a-a8d6-230022150d55-utilities\") pod \"community-operators-4knpg\" (UID: \"5409ba7c-5123-492a-a8d6-230022150d55\") " pod="openshift-marketplace/community-operators-4knpg" Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.503654 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5409ba7c-5123-492a-a8d6-230022150d55-catalog-content\") pod \"community-operators-4knpg\" (UID: \"5409ba7c-5123-492a-a8d6-230022150d55\") " pod="openshift-marketplace/community-operators-4knpg" Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.504148 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5409ba7c-5123-492a-a8d6-230022150d55-utilities\") pod \"community-operators-4knpg\" (UID: \"5409ba7c-5123-492a-a8d6-230022150d55\") " pod="openshift-marketplace/community-operators-4knpg" Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.504158 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5409ba7c-5123-492a-a8d6-230022150d55-catalog-content\") pod \"community-operators-4knpg\" (UID: \"5409ba7c-5123-492a-a8d6-230022150d55\") " pod="openshift-marketplace/community-operators-4knpg" Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.522681 5008 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xbn7k\" (UniqueName: \"kubernetes.io/projected/5409ba7c-5123-492a-a8d6-230022150d55-kube-api-access-xbn7k\") pod \"community-operators-4knpg\" (UID: \"5409ba7c-5123-492a-a8d6-230022150d55\") " pod="openshift-marketplace/community-operators-4knpg" Jan 29 16:19:00 crc kubenswrapper[5008]: I0129 16:19:00.696109 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4knpg" Jan 29 16:19:01 crc kubenswrapper[5008]: W0129 16:19:01.207917 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5409ba7c_5123_492a_a8d6_230022150d55.slice/crio-604777d2fef66e9fcb2db8ebbf9e755a893c019f9d051a093e450298bdc86dfa WatchSource:0}: Error finding container 604777d2fef66e9fcb2db8ebbf9e755a893c019f9d051a093e450298bdc86dfa: Status 404 returned error can't find the container with id 604777d2fef66e9fcb2db8ebbf9e755a893c019f9d051a093e450298bdc86dfa Jan 29 16:19:01 crc kubenswrapper[5008]: I0129 16:19:01.210677 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4knpg"] Jan 29 16:19:01 crc kubenswrapper[5008]: I0129 16:19:01.342760 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cf4cf5b-529f-49a9-900c-a94b840568d8" path="/var/lib/kubelet/pods/0cf4cf5b-529f-49a9-900c-a94b840568d8/volumes" Jan 29 16:19:01 crc kubenswrapper[5008]: I0129 16:19:01.344083 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8288f5b4-361c-4f53-bcc9-5ec9a42464cb" path="/var/lib/kubelet/pods/8288f5b4-361c-4f53-bcc9-5ec9a42464cb/volumes" Jan 29 16:19:01 crc kubenswrapper[5008]: I0129 16:19:01.403663 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4knpg" event={"ID":"5409ba7c-5123-492a-a8d6-230022150d55","Type":"ContainerStarted","Data":"604777d2fef66e9fcb2db8ebbf9e755a893c019f9d051a093e450298bdc86dfa"} Jan 29 16:19:02 crc kubenswrapper[5008]: I0129 16:19:02.072156 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 16:19:02 crc kubenswrapper[5008]: I0129 16:19:02.412299 5008 generic.go:334] "Generic (PLEG): container finished" podID="5409ba7c-5123-492a-a8d6-230022150d55" containerID="2b378a940467f0cbf6472d864710a671bc24ca70038bead4c823f0e0d9f2216e" exitCode=0 Jan 29 16:19:02 crc kubenswrapper[5008]: I0129 16:19:02.412353 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4knpg" event={"ID":"5409ba7c-5123-492a-a8d6-230022150d55","Type":"ContainerDied","Data":"2b378a940467f0cbf6472d864710a671bc24ca70038bead4c823f0e0d9f2216e"} Jan 29 16:19:02 crc kubenswrapper[5008]: I0129 16:19:02.561600 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-x9bx7_cf3d6df4-e07e-4d72-b2b6-20dcb29700d7/control-plane-machine-set-operator/0.log" Jan 29 16:19:02 crc kubenswrapper[5008]: I0129 16:19:02.776326 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-fsx74_6db03bb1-4833-4d3f-82d5-08ec5710251f/kube-rbac-proxy/0.log" Jan 29 16:19:02 crc kubenswrapper[5008]: I0129 16:19:02.807203 5008 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-fsx74_6db03bb1-4833-4d3f-82d5-08ec5710251f/machine-api-operator/0.log" Jan 29 16:19:03 crc kubenswrapper[5008]: I0129 16:19:03.423765 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4knpg" event={"ID":"5409ba7c-5123-492a-a8d6-230022150d55","Type":"ContainerStarted","Data":"f1c77d927731adf7b001813ceae07bac5ab7c66d0cbd88037fe9806ab861479f"} Jan 29 16:19:04 crc kubenswrapper[5008]: I0129 16:19:04.438417 5008 generic.go:334] "Generic (PLEG): container finished" podID="5409ba7c-5123-492a-a8d6-230022150d55" containerID="f1c77d927731adf7b001813ceae07bac5ab7c66d0cbd88037fe9806ab861479f" exitCode=0 Jan 29 16:19:04 crc kubenswrapper[5008]: I0129 16:19:04.439011 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4knpg" event={"ID":"5409ba7c-5123-492a-a8d6-230022150d55","Type":"ContainerDied","Data":"f1c77d927731adf7b001813ceae07bac5ab7c66d0cbd88037fe9806ab861479f"} Jan 29 16:19:05 crc kubenswrapper[5008]: E0129 16:19:05.328953 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" Jan 29 16:19:05 crc kubenswrapper[5008]: I0129 16:19:05.460480 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4knpg" event={"ID":"5409ba7c-5123-492a-a8d6-230022150d55","Type":"ContainerStarted","Data":"df92da691b7e09608589d7055d2a73ce0f2f45458c81ee524a84f0764a8a0ba9"} Jan 29 16:19:05 crc kubenswrapper[5008]: I0129 16:19:05.483402 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4knpg" podStartSLOduration=3.056245486 podStartE2EDuration="5.483384676s" podCreationTimestamp="2026-01-29 16:19:00 +0000 UTC" firstStartedPulling="2026-01-29 16:19:02.41490054 +0000 UTC m=+3086.087754777" lastFinishedPulling="2026-01-29 16:19:04.84203973 +0000 UTC m=+3088.514893967" observedRunningTime="2026-01-29 16:19:05.47941334 +0000 UTC m=+3089.152267597" watchObservedRunningTime="2026-01-29 16:19:05.483384676 +0000 UTC m=+3089.156238903" Jan 29 16:19:06 crc kubenswrapper[5008]: I0129 16:19:06.290618 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 16:19:06 crc kubenswrapper[5008]: I0129 16:19:06.291749 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="2691fca5-fe1e-4796-bf43-7135e9d5a198" containerName="kube-state-metrics" containerID="cri-o://9e1a6f84d62e1a65b8306defe6e32b9e1a35b50bcd62a48cbe68e10cb95676c7" gracePeriod=30 Jan 29 16:19:06 crc kubenswrapper[5008]: I0129 16:19:06.472674 5008 generic.go:334] "Generic (PLEG): container finished" podID="2691fca5-fe1e-4796-bf43-7135e9d5a198" containerID="9e1a6f84d62e1a65b8306defe6e32b9e1a35b50bcd62a48cbe68e10cb95676c7" exitCode=2 Jan 29 16:19:06 crc kubenswrapper[5008]: I0129 16:19:06.472748 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2691fca5-fe1e-4796-bf43-7135e9d5a198","Type":"ContainerDied","Data":"9e1a6f84d62e1a65b8306defe6e32b9e1a35b50bcd62a48cbe68e10cb95676c7"} Jan 29 16:19:06 crc kubenswrapper[5008]: I0129 16:19:06.782694 5008 util.go:48] "No 
Jan 29 16:19:06 crc kubenswrapper[5008]: I0129 16:19:06.863131 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzp55\" (UniqueName: \"kubernetes.io/projected/2691fca5-fe1e-4796-bf43-7135e9d5a198-kube-api-access-hzp55\") pod \"2691fca5-fe1e-4796-bf43-7135e9d5a198\" (UID: \"2691fca5-fe1e-4796-bf43-7135e9d5a198\") "
Jan 29 16:19:06 crc kubenswrapper[5008]: I0129 16:19:06.869340 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2691fca5-fe1e-4796-bf43-7135e9d5a198-kube-api-access-hzp55" (OuterVolumeSpecName: "kube-api-access-hzp55") pod "2691fca5-fe1e-4796-bf43-7135e9d5a198" (UID: "2691fca5-fe1e-4796-bf43-7135e9d5a198"). InnerVolumeSpecName "kube-api-access-hzp55". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:19:06 crc kubenswrapper[5008]: I0129 16:19:06.965055 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzp55\" (UniqueName: \"kubernetes.io/projected/2691fca5-fe1e-4796-bf43-7135e9d5a198-kube-api-access-hzp55\") on node \"crc\" DevicePath \"\""
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.482689 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2691fca5-fe1e-4796-bf43-7135e9d5a198","Type":"ContainerDied","Data":"7986044eeb1cbc11c730082d941ee043dc7374de8a33bf15addb097a4c50eaac"}
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.482745 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.483048 5008 scope.go:117] "RemoveContainer" containerID="9e1a6f84d62e1a65b8306defe6e32b9e1a35b50bcd62a48cbe68e10cb95676c7"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.508841 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.519868 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.532355 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 29 16:19:07 crc kubenswrapper[5008]: E0129 16:19:07.532760 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2691fca5-fe1e-4796-bf43-7135e9d5a198" containerName="kube-state-metrics"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.532792 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="2691fca5-fe1e-4796-bf43-7135e9d5a198" containerName="kube-state-metrics"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.533015 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="2691fca5-fe1e-4796-bf43-7135e9d5a198" containerName="kube-state-metrics"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.533823 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.535721 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.536225 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.545513 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.680275 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szz2t\" (UniqueName: \"kubernetes.io/projected/deccddae-c37c-4d93-8591-9de86885520d-kube-api-access-szz2t\") pod \"kube-state-metrics-0\" (UID: \"deccddae-c37c-4d93-8591-9de86885520d\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.680551 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deccddae-c37c-4d93-8591-9de86885520d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"deccddae-c37c-4d93-8591-9de86885520d\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.680897 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/deccddae-c37c-4d93-8591-9de86885520d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"deccddae-c37c-4d93-8591-9de86885520d\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.680979 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/deccddae-c37c-4d93-8591-9de86885520d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"deccddae-c37c-4d93-8591-9de86885520d\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.782746 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deccddae-c37c-4d93-8591-9de86885520d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"deccddae-c37c-4d93-8591-9de86885520d\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.782893 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/deccddae-c37c-4d93-8591-9de86885520d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"deccddae-c37c-4d93-8591-9de86885520d\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.782923 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/deccddae-c37c-4d93-8591-9de86885520d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"deccddae-c37c-4d93-8591-9de86885520d\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.782971 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szz2t\" (UniqueName: \"kubernetes.io/projected/deccddae-c37c-4d93-8591-9de86885520d-kube-api-access-szz2t\") pod \"kube-state-metrics-0\" (UID: \"deccddae-c37c-4d93-8591-9de86885520d\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.788857 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/deccddae-c37c-4d93-8591-9de86885520d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"deccddae-c37c-4d93-8591-9de86885520d\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.789353 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/deccddae-c37c-4d93-8591-9de86885520d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"deccddae-c37c-4d93-8591-9de86885520d\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.790471 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deccddae-c37c-4d93-8591-9de86885520d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"deccddae-c37c-4d93-8591-9de86885520d\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.800370 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szz2t\" (UniqueName: \"kubernetes.io/projected/deccddae-c37c-4d93-8591-9de86885520d-kube-api-access-szz2t\") pod \"kube-state-metrics-0\" (UID: \"deccddae-c37c-4d93-8591-9de86885520d\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:19:07 crc kubenswrapper[5008]: I0129 16:19:07.856695 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 29 16:19:08 crc kubenswrapper[5008]: I0129 16:19:08.182455 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 16:19:08 crc kubenswrapper[5008]: I0129 16:19:08.183114 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerName="sg-core" containerID="cri-o://94c1a4df24e57801e6f811a20fbda55d2b2aa44f90464614f709fcc1c7771571" gracePeriod=30
Jan 29 16:19:08 crc kubenswrapper[5008]: I0129 16:19:08.183114 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerName="proxy-httpd" containerID="cri-o://9e74ba55685ef91dc5c5fd4f75d0c04e6a02240db3ef22d23b01c38947545bf7" gracePeriod=30
Jan 29 16:19:08 crc kubenswrapper[5008]: I0129 16:19:08.183198 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerName="ceilometer-notification-agent" containerID="cri-o://c4722e08cd543a7198136070e2b6ad5db84511db8bbbbb4f4cc49e9edd0c3d33" gracePeriod=30
Jan 29 16:19:08 crc kubenswrapper[5008]: I0129 16:19:08.185957 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerName="ceilometer-central-agent" containerID="cri-o://cbbd1ae9f5180a48bfb6b0e06422201465dab2f80d3bcb0bb07d69614c78274c" gracePeriod=30
Jan 29 16:19:08 crc kubenswrapper[5008]: I0129 16:19:08.320701 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 29 16:19:08 crc kubenswrapper[5008]: I0129 16:19:08.493313 5008 generic.go:334] "Generic (PLEG): container finished" podID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerID="9e74ba55685ef91dc5c5fd4f75d0c04e6a02240db3ef22d23b01c38947545bf7" exitCode=0
Jan 29 16:19:08 crc kubenswrapper[5008]: I0129 16:19:08.493579 5008 generic.go:334] "Generic (PLEG): container finished" podID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerID="94c1a4df24e57801e6f811a20fbda55d2b2aa44f90464614f709fcc1c7771571" exitCode=2
Jan 29 16:19:08 crc kubenswrapper[5008]: I0129 16:19:08.493381 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40740f9-e8d8-4f46-b8b0-d913a6c33210","Type":"ContainerDied","Data":"9e74ba55685ef91dc5c5fd4f75d0c04e6a02240db3ef22d23b01c38947545bf7"}
Jan 29 16:19:08 crc kubenswrapper[5008]: I0129 16:19:08.493643 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40740f9-e8d8-4f46-b8b0-d913a6c33210","Type":"ContainerDied","Data":"94c1a4df24e57801e6f811a20fbda55d2b2aa44f90464614f709fcc1c7771571"}
Jan 29 16:19:08 crc kubenswrapper[5008]: I0129 16:19:08.496834 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"deccddae-c37c-4d93-8591-9de86885520d","Type":"ContainerStarted","Data":"9efb8b918abccd3b69c6ca6aa126d244e44c1f496ecaa07923b76f93590d77c9"}
Jan 29 16:19:09 crc kubenswrapper[5008]: E0129 16:19:09.325166 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"
podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.341162 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2691fca5-fe1e-4796-bf43-7135e9d5a198" path="/var/lib/kubelet/pods/2691fca5-fe1e-4796-bf43-7135e9d5a198/volumes" Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.509444 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"deccddae-c37c-4d93-8591-9de86885520d","Type":"ContainerStarted","Data":"d8ee7814d4a4eda01787615126315da22cae7f8ac0db50c0a81034b82f401057"} Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.509674 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.512835 5008 generic.go:334] "Generic (PLEG): container finished" podID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerID="c4722e08cd543a7198136070e2b6ad5db84511db8bbbbb4f4cc49e9edd0c3d33" exitCode=0 Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.512872 5008 generic.go:334] "Generic (PLEG): container finished" podID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerID="cbbd1ae9f5180a48bfb6b0e06422201465dab2f80d3bcb0bb07d69614c78274c" exitCode=0 Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.512900 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40740f9-e8d8-4f46-b8b0-d913a6c33210","Type":"ContainerDied","Data":"c4722e08cd543a7198136070e2b6ad5db84511db8bbbbb4f4cc49e9edd0c3d33"} Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.512931 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40740f9-e8d8-4f46-b8b0-d913a6c33210","Type":"ContainerDied","Data":"cbbd1ae9f5180a48bfb6b0e06422201465dab2f80d3bcb0bb07d69614c78274c"} Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.540888 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.778521366 podStartE2EDuration="2.540844172s" podCreationTimestamp="2026-01-29 16:19:07 +0000 UTC" firstStartedPulling="2026-01-29 16:19:08.331446138 +0000 UTC m=+3092.004300375" lastFinishedPulling="2026-01-29 16:19:09.093768944 +0000 UTC m=+3092.766623181" observedRunningTime="2026-01-29 16:19:09.526650057 +0000 UTC m=+3093.199504314" watchObservedRunningTime="2026-01-29 16:19:09.540844172 +0000 UTC m=+3093.213698439" Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.617896 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.723076 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zk8n\" (UniqueName: \"kubernetes.io/projected/d40740f9-e8d8-4f46-b8b0-d913a6c33210-kube-api-access-4zk8n\") pod \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.723213 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40740f9-e8d8-4f46-b8b0-d913a6c33210-log-httpd\") pod \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.723252 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-sg-core-conf-yaml\") pod \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.723314 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40740f9-e8d8-4f46-b8b0-d913a6c33210-run-httpd\") pod \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.723375 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-combined-ca-bundle\") pod \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.723420 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-scripts\") pod \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.723489 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-config-data\") pod \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\" (UID: \"d40740f9-e8d8-4f46-b8b0-d913a6c33210\") " Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.723843 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d40740f9-e8d8-4f46-b8b0-d913a6c33210-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d40740f9-e8d8-4f46-b8b0-d913a6c33210" (UID: "d40740f9-e8d8-4f46-b8b0-d913a6c33210"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.724253 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d40740f9-e8d8-4f46-b8b0-d913a6c33210-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d40740f9-e8d8-4f46-b8b0-d913a6c33210" (UID: "d40740f9-e8d8-4f46-b8b0-d913a6c33210"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.724423 5008 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40740f9-e8d8-4f46-b8b0-d913a6c33210-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.724441 5008 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40740f9-e8d8-4f46-b8b0-d913a6c33210-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.735627 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d40740f9-e8d8-4f46-b8b0-d913a6c33210-kube-api-access-4zk8n" (OuterVolumeSpecName: "kube-api-access-4zk8n") pod "d40740f9-e8d8-4f46-b8b0-d913a6c33210" (UID: "d40740f9-e8d8-4f46-b8b0-d913a6c33210"). InnerVolumeSpecName "kube-api-access-4zk8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.738439 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-scripts" (OuterVolumeSpecName: "scripts") pod "d40740f9-e8d8-4f46-b8b0-d913a6c33210" (UID: "d40740f9-e8d8-4f46-b8b0-d913a6c33210"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.776706 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d40740f9-e8d8-4f46-b8b0-d913a6c33210" (UID: "d40740f9-e8d8-4f46-b8b0-d913a6c33210"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.826988 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zk8n\" (UniqueName: \"kubernetes.io/projected/d40740f9-e8d8-4f46-b8b0-d913a6c33210-kube-api-access-4zk8n\") on node \"crc\" DevicePath \"\"" Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.827021 5008 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.827033 5008 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.858052 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d40740f9-e8d8-4f46-b8b0-d913a6c33210" (UID: "d40740f9-e8d8-4f46-b8b0-d913a6c33210"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.875306 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-config-data" (OuterVolumeSpecName: "config-data") pod "d40740f9-e8d8-4f46-b8b0-d913a6c33210" (UID: "d40740f9-e8d8-4f46-b8b0-d913a6c33210"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.928431 5008 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:19:09 crc kubenswrapper[5008]: I0129 16:19:09.928460 5008 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40740f9-e8d8-4f46-b8b0-d913a6c33210-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.527262 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40740f9-e8d8-4f46-b8b0-d913a6c33210","Type":"ContainerDied","Data":"c0e05b5105ed0e3757d467eff34631c34dcca13e2acddb3cd6556349dd4ddb10"} Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.527330 5008 scope.go:117] "RemoveContainer" containerID="9e74ba55685ef91dc5c5fd4f75d0c04e6a02240db3ef22d23b01c38947545bf7" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.527359 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.551127 5008 scope.go:117] "RemoveContainer" containerID="94c1a4df24e57801e6f811a20fbda55d2b2aa44f90464614f709fcc1c7771571" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.573807 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.577676 5008 scope.go:117] "RemoveContainer" containerID="c4722e08cd543a7198136070e2b6ad5db84511db8bbbbb4f4cc49e9edd0c3d33" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.584540 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.599330 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:19:10 crc kubenswrapper[5008]: E0129 16:19:10.601802 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerName="ceilometer-central-agent" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.601924 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerName="ceilometer-central-agent" Jan 29 16:19:10 crc kubenswrapper[5008]: E0129 16:19:10.602014 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerName="ceilometer-notification-agent" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.602086 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerName="ceilometer-notification-agent" Jan 29 16:19:10 crc kubenswrapper[5008]: E0129 16:19:10.602350 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerName="proxy-httpd" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.602428 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerName="proxy-httpd" Jan 29 16:19:10 crc kubenswrapper[5008]: E0129 16:19:10.602561 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerName="sg-core" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.602649 5008 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerName="sg-core" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.603103 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerName="sg-core" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.603208 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerName="ceilometer-central-agent" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.603332 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerName="ceilometer-notification-agent" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.603550 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" containerName="proxy-httpd" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.606079 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.610906 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.613390 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.613583 5008 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.617224 5008 scope.go:117] "RemoveContainer" containerID="cbbd1ae9f5180a48bfb6b0e06422201465dab2f80d3bcb0bb07d69614c78274c" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.617393 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.641513 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qndgh\" (UniqueName: \"kubernetes.io/projected/555cfdd3-d86d-45e5-97d5-6f27537a4689-kube-api-access-qndgh\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.641850 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/555cfdd3-d86d-45e5-97d5-6f27537a4689-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.641950 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/555cfdd3-d86d-45e5-97d5-6f27537a4689-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.642050 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555cfdd3-d86d-45e5-97d5-6f27537a4689-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.642118 5008 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/555cfdd3-d86d-45e5-97d5-6f27537a4689-run-httpd\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.642305 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/555cfdd3-d86d-45e5-97d5-6f27537a4689-log-httpd\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.642467 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/555cfdd3-d86d-45e5-97d5-6f27537a4689-scripts\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.642541 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/555cfdd3-d86d-45e5-97d5-6f27537a4689-config-data\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.697079 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4knpg" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.698493 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4knpg" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.743868 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/555cfdd3-d86d-45e5-97d5-6f27537a4689-scripts\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.744085 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/555cfdd3-d86d-45e5-97d5-6f27537a4689-config-data\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.744163 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qndgh\" (UniqueName: \"kubernetes.io/projected/555cfdd3-d86d-45e5-97d5-6f27537a4689-kube-api-access-qndgh\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.744309 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/555cfdd3-d86d-45e5-97d5-6f27537a4689-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.744400 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/555cfdd3-d86d-45e5-97d5-6f27537a4689-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0" Jan 29 16:19:10 crc 
kubenswrapper[5008]: I0129 16:19:10.744463 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555cfdd3-d86d-45e5-97d5-6f27537a4689-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.744530 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/555cfdd3-d86d-45e5-97d5-6f27537a4689-run-httpd\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.744596 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/555cfdd3-d86d-45e5-97d5-6f27537a4689-log-httpd\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.745022 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/555cfdd3-d86d-45e5-97d5-6f27537a4689-run-httpd\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.745120 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/555cfdd3-d86d-45e5-97d5-6f27537a4689-log-httpd\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.749011 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/555cfdd3-d86d-45e5-97d5-6f27537a4689-scripts\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.750562 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/555cfdd3-d86d-45e5-97d5-6f27537a4689-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.752341 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555cfdd3-d86d-45e5-97d5-6f27537a4689-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.754653 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/555cfdd3-d86d-45e5-97d5-6f27537a4689-config-data\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.762208 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4knpg" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.762395 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/555cfdd3-d86d-45e5-97d5-6f27537a4689-ceilometer-tls-certs\") pod 
\"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.764919 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qndgh\" (UniqueName: \"kubernetes.io/projected/555cfdd3-d86d-45e5-97d5-6f27537a4689-kube-api-access-qndgh\") pod \"ceilometer-0\" (UID: \"555cfdd3-d86d-45e5-97d5-6f27537a4689\") " pod="openstack/ceilometer-0" Jan 29 16:19:10 crc kubenswrapper[5008]: I0129 16:19:10.935992 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:19:11 crc kubenswrapper[5008]: I0129 16:19:11.335281 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d40740f9-e8d8-4f46-b8b0-d913a6c33210" path="/var/lib/kubelet/pods/d40740f9-e8d8-4f46-b8b0-d913a6c33210/volumes" Jan 29 16:19:11 crc kubenswrapper[5008]: W0129 16:19:11.404526 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod555cfdd3_d86d_45e5_97d5_6f27537a4689.slice/crio-314bcb8a04f50cb7e6dfb3e1789b3a53ab0ebdc035310f1078fece85bc42eabb WatchSource:0}: Error finding container 314bcb8a04f50cb7e6dfb3e1789b3a53ab0ebdc035310f1078fece85bc42eabb: Status 404 returned error can't find the container with id 314bcb8a04f50cb7e6dfb3e1789b3a53ab0ebdc035310f1078fece85bc42eabb Jan 29 16:19:11 crc kubenswrapper[5008]: I0129 16:19:11.409308 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:19:11 crc kubenswrapper[5008]: I0129 16:19:11.538031 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"555cfdd3-d86d-45e5-97d5-6f27537a4689","Type":"ContainerStarted","Data":"314bcb8a04f50cb7e6dfb3e1789b3a53ab0ebdc035310f1078fece85bc42eabb"} Jan 29 16:19:11 crc kubenswrapper[5008]: I0129 16:19:11.591245 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4knpg" Jan 29 16:19:11 crc kubenswrapper[5008]: I0129 16:19:11.653752 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4knpg"] Jan 29 16:19:12 crc kubenswrapper[5008]: I0129 16:19:12.547393 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"555cfdd3-d86d-45e5-97d5-6f27537a4689","Type":"ContainerStarted","Data":"6be43b31ac910ae4ec1f4dba9656fa5c8c8e4239b7c47021892fcb2549bb6e77"} Jan 29 16:19:13 crc kubenswrapper[5008]: I0129 16:19:13.560846 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"555cfdd3-d86d-45e5-97d5-6f27537a4689","Type":"ContainerStarted","Data":"ef86ed1f511fa47159eefd44a4cb01541b31a4e53f09dfa6ad6a93886f3b3e3f"} Jan 29 16:19:13 crc kubenswrapper[5008]: I0129 16:19:13.561007 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4knpg" podUID="5409ba7c-5123-492a-a8d6-230022150d55" containerName="registry-server" containerID="cri-o://df92da691b7e09608589d7055d2a73ce0f2f45458c81ee524a84f0764a8a0ba9" gracePeriod=2 Jan 29 16:19:13 crc kubenswrapper[5008]: I0129 16:19:13.948586 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4knpg" Jan 29 16:19:13 crc kubenswrapper[5008]: I0129 16:19:13.990707 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:19:13 crc kubenswrapper[5008]: I0129 16:19:13.990753 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.009848 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5409ba7c-5123-492a-a8d6-230022150d55-catalog-content\") pod \"5409ba7c-5123-492a-a8d6-230022150d55\" (UID: \"5409ba7c-5123-492a-a8d6-230022150d55\") " Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.010063 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbn7k\" (UniqueName: \"kubernetes.io/projected/5409ba7c-5123-492a-a8d6-230022150d55-kube-api-access-xbn7k\") pod \"5409ba7c-5123-492a-a8d6-230022150d55\" (UID: \"5409ba7c-5123-492a-a8d6-230022150d55\") " Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.010089 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5409ba7c-5123-492a-a8d6-230022150d55-utilities\") pod \"5409ba7c-5123-492a-a8d6-230022150d55\" (UID: \"5409ba7c-5123-492a-a8d6-230022150d55\") " Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.011290 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5409ba7c-5123-492a-a8d6-230022150d55-utilities" (OuterVolumeSpecName: "utilities") pod "5409ba7c-5123-492a-a8d6-230022150d55" (UID: "5409ba7c-5123-492a-a8d6-230022150d55"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.017318 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5409ba7c-5123-492a-a8d6-230022150d55-kube-api-access-xbn7k" (OuterVolumeSpecName: "kube-api-access-xbn7k") pod "5409ba7c-5123-492a-a8d6-230022150d55" (UID: "5409ba7c-5123-492a-a8d6-230022150d55"). InnerVolumeSpecName "kube-api-access-xbn7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.074509 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5409ba7c-5123-492a-a8d6-230022150d55-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5409ba7c-5123-492a-a8d6-230022150d55" (UID: "5409ba7c-5123-492a-a8d6-230022150d55"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.112829 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbn7k\" (UniqueName: \"kubernetes.io/projected/5409ba7c-5123-492a-a8d6-230022150d55-kube-api-access-xbn7k\") on node \"crc\" DevicePath \"\"" Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.112911 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5409ba7c-5123-492a-a8d6-230022150d55-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.112928 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5409ba7c-5123-492a-a8d6-230022150d55-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.571190 5008 generic.go:334] "Generic (PLEG): container finished" podID="5409ba7c-5123-492a-a8d6-230022150d55" containerID="df92da691b7e09608589d7055d2a73ce0f2f45458c81ee524a84f0764a8a0ba9" exitCode=0 Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.571267 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4knpg" Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.571276 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4knpg" event={"ID":"5409ba7c-5123-492a-a8d6-230022150d55","Type":"ContainerDied","Data":"df92da691b7e09608589d7055d2a73ce0f2f45458c81ee524a84f0764a8a0ba9"} Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.571774 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4knpg" event={"ID":"5409ba7c-5123-492a-a8d6-230022150d55","Type":"ContainerDied","Data":"604777d2fef66e9fcb2db8ebbf9e755a893c019f9d051a093e450298bdc86dfa"} Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.571845 5008 scope.go:117] "RemoveContainer" containerID="df92da691b7e09608589d7055d2a73ce0f2f45458c81ee524a84f0764a8a0ba9" Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.576796 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"555cfdd3-d86d-45e5-97d5-6f27537a4689","Type":"ContainerStarted","Data":"24d093ac26006df6aeca5f3301dd74d900a848c05db60e157234b31aa6e5e9b9"} Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.599309 5008 scope.go:117] "RemoveContainer" containerID="f1c77d927731adf7b001813ceae07bac5ab7c66d0cbd88037fe9806ab861479f" Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.610718 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4knpg"] Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.621115 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4knpg"] Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.623962 5008 scope.go:117] "RemoveContainer" containerID="2b378a940467f0cbf6472d864710a671bc24ca70038bead4c823f0e0d9f2216e" Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.663653 5008 scope.go:117] "RemoveContainer" containerID="df92da691b7e09608589d7055d2a73ce0f2f45458c81ee524a84f0764a8a0ba9" Jan 29 16:19:14 crc kubenswrapper[5008]: E0129 16:19:14.664065 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"df92da691b7e09608589d7055d2a73ce0f2f45458c81ee524a84f0764a8a0ba9\": container with ID starting with df92da691b7e09608589d7055d2a73ce0f2f45458c81ee524a84f0764a8a0ba9 not found: ID does not exist" containerID="df92da691b7e09608589d7055d2a73ce0f2f45458c81ee524a84f0764a8a0ba9" Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.664127 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df92da691b7e09608589d7055d2a73ce0f2f45458c81ee524a84f0764a8a0ba9"} err="failed to get container status \"df92da691b7e09608589d7055d2a73ce0f2f45458c81ee524a84f0764a8a0ba9\": rpc error: code = NotFound desc = could not find container \"df92da691b7e09608589d7055d2a73ce0f2f45458c81ee524a84f0764a8a0ba9\": container with ID starting with df92da691b7e09608589d7055d2a73ce0f2f45458c81ee524a84f0764a8a0ba9 not found: ID does not exist" Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.664158 5008 scope.go:117] "RemoveContainer" containerID="f1c77d927731adf7b001813ceae07bac5ab7c66d0cbd88037fe9806ab861479f" Jan 29 16:19:14 crc kubenswrapper[5008]: E0129 16:19:14.664536 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1c77d927731adf7b001813ceae07bac5ab7c66d0cbd88037fe9806ab861479f\": container with ID starting with f1c77d927731adf7b001813ceae07bac5ab7c66d0cbd88037fe9806ab861479f not found: ID does not exist" containerID="f1c77d927731adf7b001813ceae07bac5ab7c66d0cbd88037fe9806ab861479f" Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.664565 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1c77d927731adf7b001813ceae07bac5ab7c66d0cbd88037fe9806ab861479f"} err="failed to get container status \"f1c77d927731adf7b001813ceae07bac5ab7c66d0cbd88037fe9806ab861479f\": rpc error: code = NotFound desc = could not find container \"f1c77d927731adf7b001813ceae07bac5ab7c66d0cbd88037fe9806ab861479f\": container with ID starting with f1c77d927731adf7b001813ceae07bac5ab7c66d0cbd88037fe9806ab861479f not found: ID does not exist" Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.664585 5008 scope.go:117] "RemoveContainer" containerID="2b378a940467f0cbf6472d864710a671bc24ca70038bead4c823f0e0d9f2216e" Jan 29 16:19:14 crc kubenswrapper[5008]: E0129 16:19:14.664899 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b378a940467f0cbf6472d864710a671bc24ca70038bead4c823f0e0d9f2216e\": container with ID starting with 2b378a940467f0cbf6472d864710a671bc24ca70038bead4c823f0e0d9f2216e not found: ID does not exist" containerID="2b378a940467f0cbf6472d864710a671bc24ca70038bead4c823f0e0d9f2216e" Jan 29 16:19:14 crc kubenswrapper[5008]: I0129 16:19:14.664934 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b378a940467f0cbf6472d864710a671bc24ca70038bead4c823f0e0d9f2216e"} err="failed to get container status \"2b378a940467f0cbf6472d864710a671bc24ca70038bead4c823f0e0d9f2216e\": rpc error: code = NotFound desc = could not find container \"2b378a940467f0cbf6472d864710a671bc24ca70038bead4c823f0e0d9f2216e\": container with ID starting with 2b378a940467f0cbf6472d864710a671bc24ca70038bead4c823f0e0d9f2216e not found: ID does not exist" Jan 29 16:19:15 crc kubenswrapper[5008]: I0129 16:19:15.335622 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5409ba7c-5123-492a-a8d6-230022150d55" 
path="/var/lib/kubelet/pods/5409ba7c-5123-492a-a8d6-230022150d55/volumes" Jan 29 16:19:16 crc kubenswrapper[5008]: I0129 16:19:16.599107 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"555cfdd3-d86d-45e5-97d5-6f27537a4689","Type":"ContainerStarted","Data":"898801d18f7d47890091d3b9387543becdf9583287ef369259c6bb440c0ba97e"} Jan 29 16:19:16 crc kubenswrapper[5008]: I0129 16:19:16.599484 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 16:19:16 crc kubenswrapper[5008]: I0129 16:19:16.627089 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.498151768 podStartE2EDuration="6.627071779s" podCreationTimestamp="2026-01-29 16:19:10 +0000 UTC" firstStartedPulling="2026-01-29 16:19:11.406906434 +0000 UTC m=+3095.079760671" lastFinishedPulling="2026-01-29 16:19:15.535826445 +0000 UTC m=+3099.208680682" observedRunningTime="2026-01-29 16:19:16.618645784 +0000 UTC m=+3100.291500031" watchObservedRunningTime="2026-01-29 16:19:16.627071779 +0000 UTC m=+3100.299926026" Jan 29 16:19:17 crc kubenswrapper[5008]: I0129 16:19:17.214558 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-fbjsd_346fd378-8582-44af-8332-dad183bddf6e/cert-manager-controller/0.log" Jan 29 16:19:17 crc kubenswrapper[5008]: I0129 16:19:17.392769 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-dvjtx_1217edcf-8ec1-4354-8fbe-a9325b564932/cert-manager-cainjector/0.log" Jan 29 16:19:17 crc kubenswrapper[5008]: I0129 16:19:17.493252 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-wvlhn_6111be19-5e01-42e4-b4cf-3728e3ee4a6f/cert-manager-webhook/0.log" Jan 29 16:19:17 crc kubenswrapper[5008]: I0129 16:19:17.866212 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 29 16:19:18 crc kubenswrapper[5008]: I0129 16:19:18.622086 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fl9wc" event={"ID":"66b503d3-cf12-4a89-90ca-27d7f941ed63","Type":"ContainerStarted","Data":"e0632e7f9af8247b5a7a4f0953ccb4f15c83027061e3c28a71653287247f8042"} Jan 29 16:19:19 crc kubenswrapper[5008]: I0129 16:19:19.631035 5008 generic.go:334] "Generic (PLEG): container finished" podID="66b503d3-cf12-4a89-90ca-27d7f941ed63" containerID="e0632e7f9af8247b5a7a4f0953ccb4f15c83027061e3c28a71653287247f8042" exitCode=0 Jan 29 16:19:19 crc kubenswrapper[5008]: I0129 16:19:19.631075 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fl9wc" event={"ID":"66b503d3-cf12-4a89-90ca-27d7f941ed63","Type":"ContainerDied","Data":"e0632e7f9af8247b5a7a4f0953ccb4f15c83027061e3c28a71653287247f8042"} Jan 29 16:19:20 crc kubenswrapper[5008]: I0129 16:19:20.643482 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fl9wc" event={"ID":"66b503d3-cf12-4a89-90ca-27d7f941ed63","Type":"ContainerStarted","Data":"0b08f4327220f62f3c44b671e5c402a183896b42a585d257814182a9695bbf89"} Jan 29 16:19:20 crc kubenswrapper[5008]: I0129 16:19:20.667937 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fl9wc" podStartSLOduration=3.386235129 podStartE2EDuration="11m5.667919181s" 
podCreationTimestamp="2026-01-29 16:08:15 +0000 UTC" firstStartedPulling="2026-01-29 16:08:17.727404837 +0000 UTC m=+2441.400259074" lastFinishedPulling="2026-01-29 16:19:20.009088879 +0000 UTC m=+3103.681943126" observedRunningTime="2026-01-29 16:19:20.659469186 +0000 UTC m=+3104.332323433" watchObservedRunningTime="2026-01-29 16:19:20.667919181 +0000 UTC m=+3104.340773418" Jan 29 16:19:24 crc kubenswrapper[5008]: E0129 16:19:24.326170 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" Jan 29 16:19:25 crc kubenswrapper[5008]: I0129 16:19:25.914357 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:19:25 crc kubenswrapper[5008]: I0129 16:19:25.914729 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:19:25 crc kubenswrapper[5008]: I0129 16:19:25.966471 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:19:26 crc kubenswrapper[5008]: I0129 16:19:26.738794 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:19:26 crc kubenswrapper[5008]: I0129 16:19:26.784677 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fl9wc"] Jan 29 16:19:28 crc kubenswrapper[5008]: I0129 16:19:28.704994 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fl9wc" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" containerName="registry-server" containerID="cri-o://0b08f4327220f62f3c44b671e5c402a183896b42a585d257814182a9695bbf89" gracePeriod=2 Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.215087 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.299372 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66b503d3-cf12-4a89-90ca-27d7f941ed63-utilities\") pod \"66b503d3-cf12-4a89-90ca-27d7f941ed63\" (UID: \"66b503d3-cf12-4a89-90ca-27d7f941ed63\") " Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.299549 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8j5q\" (UniqueName: \"kubernetes.io/projected/66b503d3-cf12-4a89-90ca-27d7f941ed63-kube-api-access-l8j5q\") pod \"66b503d3-cf12-4a89-90ca-27d7f941ed63\" (UID: \"66b503d3-cf12-4a89-90ca-27d7f941ed63\") " Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.299635 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66b503d3-cf12-4a89-90ca-27d7f941ed63-catalog-content\") pod \"66b503d3-cf12-4a89-90ca-27d7f941ed63\" (UID: \"66b503d3-cf12-4a89-90ca-27d7f941ed63\") " Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.301207 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66b503d3-cf12-4a89-90ca-27d7f941ed63-utilities" (OuterVolumeSpecName: "utilities") pod "66b503d3-cf12-4a89-90ca-27d7f941ed63" (UID: "66b503d3-cf12-4a89-90ca-27d7f941ed63"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.326031 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66b503d3-cf12-4a89-90ca-27d7f941ed63-kube-api-access-l8j5q" (OuterVolumeSpecName: "kube-api-access-l8j5q") pod "66b503d3-cf12-4a89-90ca-27d7f941ed63" (UID: "66b503d3-cf12-4a89-90ca-27d7f941ed63"). InnerVolumeSpecName "kube-api-access-l8j5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.351072 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66b503d3-cf12-4a89-90ca-27d7f941ed63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66b503d3-cf12-4a89-90ca-27d7f941ed63" (UID: "66b503d3-cf12-4a89-90ca-27d7f941ed63"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.402235 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8j5q\" (UniqueName: \"kubernetes.io/projected/66b503d3-cf12-4a89-90ca-27d7f941ed63-kube-api-access-l8j5q\") on node \"crc\" DevicePath \"\"" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.402294 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66b503d3-cf12-4a89-90ca-27d7f941ed63-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.402391 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66b503d3-cf12-4a89-90ca-27d7f941ed63-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.721458 5008 generic.go:334] "Generic (PLEG): container finished" podID="66b503d3-cf12-4a89-90ca-27d7f941ed63" containerID="0b08f4327220f62f3c44b671e5c402a183896b42a585d257814182a9695bbf89" exitCode=0 Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.721529 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fl9wc" event={"ID":"66b503d3-cf12-4a89-90ca-27d7f941ed63","Type":"ContainerDied","Data":"0b08f4327220f62f3c44b671e5c402a183896b42a585d257814182a9695bbf89"} Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.721594 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fl9wc" event={"ID":"66b503d3-cf12-4a89-90ca-27d7f941ed63","Type":"ContainerDied","Data":"5b1b00bb2ae97cde561959176674c8591e6b4a491353c5009f561f79b72ee787"} Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.721612 5008 scope.go:117] "RemoveContainer" containerID="0b08f4327220f62f3c44b671e5c402a183896b42a585d257814182a9695bbf89" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.721540 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fl9wc" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.744607 5008 scope.go:117] "RemoveContainer" containerID="e0632e7f9af8247b5a7a4f0953ccb4f15c83027061e3c28a71653287247f8042" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.775091 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fl9wc"] Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.790035 5008 scope.go:117] "RemoveContainer" containerID="048187ee97fe863a1a9a27bcd1b80c7e899bb088322f45c86b7fe479870681b4" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.807016 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fl9wc"] Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.831618 5008 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-thkns"] Jan 29 16:19:29 crc kubenswrapper[5008]: E0129 16:19:29.832137 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5409ba7c-5123-492a-a8d6-230022150d55" containerName="extract-content" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.832162 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="5409ba7c-5123-492a-a8d6-230022150d55" containerName="extract-content" Jan 29 16:19:29 crc kubenswrapper[5008]: E0129 16:19:29.832190 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5409ba7c-5123-492a-a8d6-230022150d55" containerName="registry-server" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.832200 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="5409ba7c-5123-492a-a8d6-230022150d55" containerName="registry-server" Jan 29 16:19:29 crc kubenswrapper[5008]: E0129 16:19:29.832222 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" containerName="registry-server" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.832231 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" containerName="registry-server" Jan 29 16:19:29 crc kubenswrapper[5008]: E0129 16:19:29.832241 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" containerName="extract-content" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.832248 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" containerName="extract-content" Jan 29 16:19:29 crc kubenswrapper[5008]: E0129 16:19:29.832271 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5409ba7c-5123-492a-a8d6-230022150d55" containerName="extract-utilities" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.832279 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="5409ba7c-5123-492a-a8d6-230022150d55" containerName="extract-utilities" Jan 29 16:19:29 crc kubenswrapper[5008]: E0129 16:19:29.832295 5008 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" containerName="extract-utilities" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.832302 5008 state_mem.go:107] "Deleted CPUSet assignment" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" containerName="extract-utilities" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.832506 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="5409ba7c-5123-492a-a8d6-230022150d55" containerName="registry-server" Jan 29 
16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.832520 5008 memory_manager.go:354] "RemoveStaleState removing state" podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" containerName="registry-server" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.842474 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.877010 5008 scope.go:117] "RemoveContainer" containerID="0b08f4327220f62f3c44b671e5c402a183896b42a585d257814182a9695bbf89" Jan 29 16:19:29 crc kubenswrapper[5008]: E0129 16:19:29.877426 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b08f4327220f62f3c44b671e5c402a183896b42a585d257814182a9695bbf89\": container with ID starting with 0b08f4327220f62f3c44b671e5c402a183896b42a585d257814182a9695bbf89 not found: ID does not exist" containerID="0b08f4327220f62f3c44b671e5c402a183896b42a585d257814182a9695bbf89" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.877474 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b08f4327220f62f3c44b671e5c402a183896b42a585d257814182a9695bbf89"} err="failed to get container status \"0b08f4327220f62f3c44b671e5c402a183896b42a585d257814182a9695bbf89\": rpc error: code = NotFound desc = could not find container \"0b08f4327220f62f3c44b671e5c402a183896b42a585d257814182a9695bbf89\": container with ID starting with 0b08f4327220f62f3c44b671e5c402a183896b42a585d257814182a9695bbf89 not found: ID does not exist" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.877502 5008 scope.go:117] "RemoveContainer" containerID="e0632e7f9af8247b5a7a4f0953ccb4f15c83027061e3c28a71653287247f8042" Jan 29 16:19:29 crc kubenswrapper[5008]: E0129 16:19:29.877766 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0632e7f9af8247b5a7a4f0953ccb4f15c83027061e3c28a71653287247f8042\": container with ID starting with e0632e7f9af8247b5a7a4f0953ccb4f15c83027061e3c28a71653287247f8042 not found: ID does not exist" containerID="e0632e7f9af8247b5a7a4f0953ccb4f15c83027061e3c28a71653287247f8042" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.877806 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0632e7f9af8247b5a7a4f0953ccb4f15c83027061e3c28a71653287247f8042"} err="failed to get container status \"e0632e7f9af8247b5a7a4f0953ccb4f15c83027061e3c28a71653287247f8042\": rpc error: code = NotFound desc = could not find container \"e0632e7f9af8247b5a7a4f0953ccb4f15c83027061e3c28a71653287247f8042\": container with ID starting with e0632e7f9af8247b5a7a4f0953ccb4f15c83027061e3c28a71653287247f8042 not found: ID does not exist" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.877822 5008 scope.go:117] "RemoveContainer" containerID="048187ee97fe863a1a9a27bcd1b80c7e899bb088322f45c86b7fe479870681b4" Jan 29 16:19:29 crc kubenswrapper[5008]: E0129 16:19:29.878117 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"048187ee97fe863a1a9a27bcd1b80c7e899bb088322f45c86b7fe479870681b4\": container with ID starting with 048187ee97fe863a1a9a27bcd1b80c7e899bb088322f45c86b7fe479870681b4 not found: ID does not exist" containerID="048187ee97fe863a1a9a27bcd1b80c7e899bb088322f45c86b7fe479870681b4" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 
16:19:29.878220 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"048187ee97fe863a1a9a27bcd1b80c7e899bb088322f45c86b7fe479870681b4"} err="failed to get container status \"048187ee97fe863a1a9a27bcd1b80c7e899bb088322f45c86b7fe479870681b4\": rpc error: code = NotFound desc = could not find container \"048187ee97fe863a1a9a27bcd1b80c7e899bb088322f45c86b7fe479870681b4\": container with ID starting with 048187ee97fe863a1a9a27bcd1b80c7e899bb088322f45c86b7fe479870681b4 not found: ID does not exist" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.878966 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-thkns"] Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.916696 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/791ec4b8-9faf-4411-86e0-1cdbba387a54-catalog-content\") pod \"redhat-marketplace-thkns\" (UID: \"791ec4b8-9faf-4411-86e0-1cdbba387a54\") " pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.916858 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/791ec4b8-9faf-4411-86e0-1cdbba387a54-utilities\") pod \"redhat-marketplace-thkns\" (UID: \"791ec4b8-9faf-4411-86e0-1cdbba387a54\") " pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:29 crc kubenswrapper[5008]: I0129 16:19:29.916904 5008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbb47\" (UniqueName: \"kubernetes.io/projected/791ec4b8-9faf-4411-86e0-1cdbba387a54-kube-api-access-lbb47\") pod \"redhat-marketplace-thkns\" (UID: \"791ec4b8-9faf-4411-86e0-1cdbba387a54\") " pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:30 crc kubenswrapper[5008]: I0129 16:19:30.018940 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/791ec4b8-9faf-4411-86e0-1cdbba387a54-catalog-content\") pod \"redhat-marketplace-thkns\" (UID: \"791ec4b8-9faf-4411-86e0-1cdbba387a54\") " pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:30 crc kubenswrapper[5008]: I0129 16:19:30.019066 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/791ec4b8-9faf-4411-86e0-1cdbba387a54-utilities\") pod \"redhat-marketplace-thkns\" (UID: \"791ec4b8-9faf-4411-86e0-1cdbba387a54\") " pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:30 crc kubenswrapper[5008]: I0129 16:19:30.019107 5008 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbb47\" (UniqueName: \"kubernetes.io/projected/791ec4b8-9faf-4411-86e0-1cdbba387a54-kube-api-access-lbb47\") pod \"redhat-marketplace-thkns\" (UID: \"791ec4b8-9faf-4411-86e0-1cdbba387a54\") " pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:30 crc kubenswrapper[5008]: I0129 16:19:30.021010 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/791ec4b8-9faf-4411-86e0-1cdbba387a54-catalog-content\") pod \"redhat-marketplace-thkns\" (UID: \"791ec4b8-9faf-4411-86e0-1cdbba387a54\") " pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:30 crc 
kubenswrapper[5008]: I0129 16:19:30.021331 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/791ec4b8-9faf-4411-86e0-1cdbba387a54-utilities\") pod \"redhat-marketplace-thkns\" (UID: \"791ec4b8-9faf-4411-86e0-1cdbba387a54\") " pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:30 crc kubenswrapper[5008]: I0129 16:19:30.046337 5008 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbb47\" (UniqueName: \"kubernetes.io/projected/791ec4b8-9faf-4411-86e0-1cdbba387a54-kube-api-access-lbb47\") pod \"redhat-marketplace-thkns\" (UID: \"791ec4b8-9faf-4411-86e0-1cdbba387a54\") " pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:30 crc kubenswrapper[5008]: I0129 16:19:30.178448 5008 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:30 crc kubenswrapper[5008]: I0129 16:19:30.205886 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-dvn47_75f20405-b349-4e5f-ba1a-b6bf348766ce/nmstate-console-plugin/0.log" Jan 29 16:19:30 crc kubenswrapper[5008]: I0129 16:19:30.457383 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-8hxxx_beee9730-825d-4a7e-9ef1-d735b1bddd07/nmstate-handler/0.log" Jan 29 16:19:30 crc kubenswrapper[5008]: I0129 16:19:30.566098 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-mtz4q_5379965a-18ce-41a4-8753-7a70ed4a5efc/kube-rbac-proxy/0.log" Jan 29 16:19:30 crc kubenswrapper[5008]: I0129 16:19:30.626934 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-mtz4q_5379965a-18ce-41a4-8753-7a70ed4a5efc/nmstate-metrics/0.log" Jan 29 16:19:30 crc kubenswrapper[5008]: I0129 16:19:30.693617 5008 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-thkns"] Jan 29 16:19:30 crc kubenswrapper[5008]: W0129 16:19:30.713300 5008 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod791ec4b8_9faf_4411_86e0_1cdbba387a54.slice/crio-ef95452c63f936703e4e92dfd38c4937746f4249a5c6a8d24f349553df025930 WatchSource:0}: Error finding container ef95452c63f936703e4e92dfd38c4937746f4249a5c6a8d24f349553df025930: Status 404 returned error can't find the container with id ef95452c63f936703e4e92dfd38c4937746f4249a5c6a8d24f349553df025930 Jan 29 16:19:30 crc kubenswrapper[5008]: I0129 16:19:30.731210 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-thkns" event={"ID":"791ec4b8-9faf-4411-86e0-1cdbba387a54","Type":"ContainerStarted","Data":"ef95452c63f936703e4e92dfd38c4937746f4249a5c6a8d24f349553df025930"} Jan 29 16:19:30 crc kubenswrapper[5008]: I0129 16:19:30.836907 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-dkpn2_5fab4312-8998-4667-af25-ba459fcb4a68/nmstate-operator/0.log" Jan 29 16:19:30 crc kubenswrapper[5008]: I0129 16:19:30.863656 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-qz5xs_6a7e5f12-26c5-4197-81ed-559569651fab/nmstate-webhook/0.log" Jan 29 16:19:31 crc kubenswrapper[5008]: I0129 16:19:31.333483 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="66b503d3-cf12-4a89-90ca-27d7f941ed63" path="/var/lib/kubelet/pods/66b503d3-cf12-4a89-90ca-27d7f941ed63/volumes" Jan 29 16:19:31 crc kubenswrapper[5008]: I0129 16:19:31.739846 5008 generic.go:334] "Generic (PLEG): container finished" podID="791ec4b8-9faf-4411-86e0-1cdbba387a54" containerID="b8f592b71d71cdb9928adb23730813b71ff2d1af6494328a287094d242021d6f" exitCode=0 Jan 29 16:19:31 crc kubenswrapper[5008]: I0129 16:19:31.740082 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-thkns" event={"ID":"791ec4b8-9faf-4411-86e0-1cdbba387a54","Type":"ContainerDied","Data":"b8f592b71d71cdb9928adb23730813b71ff2d1af6494328a287094d242021d6f"} Jan 29 16:19:32 crc kubenswrapper[5008]: I0129 16:19:32.750491 5008 generic.go:334] "Generic (PLEG): container finished" podID="791ec4b8-9faf-4411-86e0-1cdbba387a54" containerID="8293e5dd624277694d4477c3103a59422d2e6feaf4246f99a36c62e17deae579" exitCode=0 Jan 29 16:19:32 crc kubenswrapper[5008]: I0129 16:19:32.750703 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-thkns" event={"ID":"791ec4b8-9faf-4411-86e0-1cdbba387a54","Type":"ContainerDied","Data":"8293e5dd624277694d4477c3103a59422d2e6feaf4246f99a36c62e17deae579"} Jan 29 16:19:33 crc kubenswrapper[5008]: I0129 16:19:33.770226 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-thkns" event={"ID":"791ec4b8-9faf-4411-86e0-1cdbba387a54","Type":"ContainerStarted","Data":"77ad8c9afa52cdbbc48c5d4e74be56127e8b25bc18eb739a8ad34180a84e32bd"} Jan 29 16:19:33 crc kubenswrapper[5008]: I0129 16:19:33.796329 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-thkns" podStartSLOduration=3.351457325 podStartE2EDuration="4.796309897s" podCreationTimestamp="2026-01-29 16:19:29 +0000 UTC" firstStartedPulling="2026-01-29 16:19:31.74229197 +0000 UTC m=+3115.415146207" lastFinishedPulling="2026-01-29 16:19:33.187144532 +0000 UTC m=+3116.859998779" observedRunningTime="2026-01-29 16:19:33.791589682 +0000 UTC m=+3117.464443919" watchObservedRunningTime="2026-01-29 16:19:33.796309897 +0000 UTC m=+3117.469164134" Jan 29 16:19:38 crc kubenswrapper[5008]: E0129 16:19:38.325388 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" Jan 29 16:19:40 crc kubenswrapper[5008]: I0129 16:19:40.179591 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:40 crc kubenswrapper[5008]: I0129 16:19:40.180012 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:40 crc kubenswrapper[5008]: I0129 16:19:40.238160 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:40 crc kubenswrapper[5008]: I0129 16:19:40.872615 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:40 crc kubenswrapper[5008]: I0129 16:19:40.929747 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-thkns"] Jan 29 
16:19:40 crc kubenswrapper[5008]: I0129 16:19:40.947565 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 16:19:42 crc kubenswrapper[5008]: I0129 16:19:42.862380 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-thkns" podUID="791ec4b8-9faf-4411-86e0-1cdbba387a54" containerName="registry-server" containerID="cri-o://77ad8c9afa52cdbbc48c5d4e74be56127e8b25bc18eb739a8ad34180a84e32bd" gracePeriod=2 Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.351163 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.382602 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/791ec4b8-9faf-4411-86e0-1cdbba387a54-utilities\") pod \"791ec4b8-9faf-4411-86e0-1cdbba387a54\" (UID: \"791ec4b8-9faf-4411-86e0-1cdbba387a54\") " Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.382742 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbb47\" (UniqueName: \"kubernetes.io/projected/791ec4b8-9faf-4411-86e0-1cdbba387a54-kube-api-access-lbb47\") pod \"791ec4b8-9faf-4411-86e0-1cdbba387a54\" (UID: \"791ec4b8-9faf-4411-86e0-1cdbba387a54\") " Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.382766 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/791ec4b8-9faf-4411-86e0-1cdbba387a54-catalog-content\") pod \"791ec4b8-9faf-4411-86e0-1cdbba387a54\" (UID: \"791ec4b8-9faf-4411-86e0-1cdbba387a54\") " Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.386640 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/791ec4b8-9faf-4411-86e0-1cdbba387a54-utilities" (OuterVolumeSpecName: "utilities") pod "791ec4b8-9faf-4411-86e0-1cdbba387a54" (UID: "791ec4b8-9faf-4411-86e0-1cdbba387a54"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.392813 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/791ec4b8-9faf-4411-86e0-1cdbba387a54-kube-api-access-lbb47" (OuterVolumeSpecName: "kube-api-access-lbb47") pod "791ec4b8-9faf-4411-86e0-1cdbba387a54" (UID: "791ec4b8-9faf-4411-86e0-1cdbba387a54"). InnerVolumeSpecName "kube-api-access-lbb47". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.404095 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/791ec4b8-9faf-4411-86e0-1cdbba387a54-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "791ec4b8-9faf-4411-86e0-1cdbba387a54" (UID: "791ec4b8-9faf-4411-86e0-1cdbba387a54"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.484052 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbb47\" (UniqueName: \"kubernetes.io/projected/791ec4b8-9faf-4411-86e0-1cdbba387a54-kube-api-access-lbb47\") on node \"crc\" DevicePath \"\"" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.484079 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/791ec4b8-9faf-4411-86e0-1cdbba387a54-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.484088 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/791ec4b8-9faf-4411-86e0-1cdbba387a54-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.871714 5008 generic.go:334] "Generic (PLEG): container finished" podID="791ec4b8-9faf-4411-86e0-1cdbba387a54" containerID="77ad8c9afa52cdbbc48c5d4e74be56127e8b25bc18eb739a8ad34180a84e32bd" exitCode=0 Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.871765 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-thkns" event={"ID":"791ec4b8-9faf-4411-86e0-1cdbba387a54","Type":"ContainerDied","Data":"77ad8c9afa52cdbbc48c5d4e74be56127e8b25bc18eb739a8ad34180a84e32bd"} Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.871817 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-thkns" event={"ID":"791ec4b8-9faf-4411-86e0-1cdbba387a54","Type":"ContainerDied","Data":"ef95452c63f936703e4e92dfd38c4937746f4249a5c6a8d24f349553df025930"} Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.871841 5008 scope.go:117] "RemoveContainer" containerID="77ad8c9afa52cdbbc48c5d4e74be56127e8b25bc18eb739a8ad34180a84e32bd" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.871852 5008 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-thkns" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.899009 5008 scope.go:117] "RemoveContainer" containerID="8293e5dd624277694d4477c3103a59422d2e6feaf4246f99a36c62e17deae579" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.910339 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-thkns"] Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.917849 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-thkns"] Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.921864 5008 scope.go:117] "RemoveContainer" containerID="b8f592b71d71cdb9928adb23730813b71ff2d1af6494328a287094d242021d6f" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.963459 5008 scope.go:117] "RemoveContainer" containerID="77ad8c9afa52cdbbc48c5d4e74be56127e8b25bc18eb739a8ad34180a84e32bd" Jan 29 16:19:43 crc kubenswrapper[5008]: E0129 16:19:43.964059 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77ad8c9afa52cdbbc48c5d4e74be56127e8b25bc18eb739a8ad34180a84e32bd\": container with ID starting with 77ad8c9afa52cdbbc48c5d4e74be56127e8b25bc18eb739a8ad34180a84e32bd not found: ID does not exist" containerID="77ad8c9afa52cdbbc48c5d4e74be56127e8b25bc18eb739a8ad34180a84e32bd" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.964105 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77ad8c9afa52cdbbc48c5d4e74be56127e8b25bc18eb739a8ad34180a84e32bd"} err="failed to get container status \"77ad8c9afa52cdbbc48c5d4e74be56127e8b25bc18eb739a8ad34180a84e32bd\": rpc error: code = NotFound desc = could not find container \"77ad8c9afa52cdbbc48c5d4e74be56127e8b25bc18eb739a8ad34180a84e32bd\": container with ID starting with 77ad8c9afa52cdbbc48c5d4e74be56127e8b25bc18eb739a8ad34180a84e32bd not found: ID does not exist" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.964131 5008 scope.go:117] "RemoveContainer" containerID="8293e5dd624277694d4477c3103a59422d2e6feaf4246f99a36c62e17deae579" Jan 29 16:19:43 crc kubenswrapper[5008]: E0129 16:19:43.964450 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8293e5dd624277694d4477c3103a59422d2e6feaf4246f99a36c62e17deae579\": container with ID starting with 8293e5dd624277694d4477c3103a59422d2e6feaf4246f99a36c62e17deae579 not found: ID does not exist" containerID="8293e5dd624277694d4477c3103a59422d2e6feaf4246f99a36c62e17deae579" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.964481 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8293e5dd624277694d4477c3103a59422d2e6feaf4246f99a36c62e17deae579"} err="failed to get container status \"8293e5dd624277694d4477c3103a59422d2e6feaf4246f99a36c62e17deae579\": rpc error: code = NotFound desc = could not find container \"8293e5dd624277694d4477c3103a59422d2e6feaf4246f99a36c62e17deae579\": container with ID starting with 8293e5dd624277694d4477c3103a59422d2e6feaf4246f99a36c62e17deae579 not found: ID does not exist" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.964523 5008 scope.go:117] "RemoveContainer" containerID="b8f592b71d71cdb9928adb23730813b71ff2d1af6494328a287094d242021d6f" Jan 29 16:19:43 crc kubenswrapper[5008]: E0129 16:19:43.964853 5008 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b8f592b71d71cdb9928adb23730813b71ff2d1af6494328a287094d242021d6f\": container with ID starting with b8f592b71d71cdb9928adb23730813b71ff2d1af6494328a287094d242021d6f not found: ID does not exist" containerID="b8f592b71d71cdb9928adb23730813b71ff2d1af6494328a287094d242021d6f" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.964879 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8f592b71d71cdb9928adb23730813b71ff2d1af6494328a287094d242021d6f"} err="failed to get container status \"b8f592b71d71cdb9928adb23730813b71ff2d1af6494328a287094d242021d6f\": rpc error: code = NotFound desc = could not find container \"b8f592b71d71cdb9928adb23730813b71ff2d1af6494328a287094d242021d6f\": container with ID starting with b8f592b71d71cdb9928adb23730813b71ff2d1af6494328a287094d242021d6f not found: ID does not exist" Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.990358 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:19:43 crc kubenswrapper[5008]: I0129 16:19:43.990409 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:19:45 crc kubenswrapper[5008]: I0129 16:19:45.336465 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="791ec4b8-9faf-4411-86e0-1cdbba387a54" path="/var/lib/kubelet/pods/791ec4b8-9faf-4411-86e0-1cdbba387a54/volumes" Jan 29 16:19:52 crc kubenswrapper[5008]: I0129 16:19:52.956544 5008 generic.go:334] "Generic (PLEG): container finished" podID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" containerID="da32733de0e082104c5258a0d60c3c5480c31ba14a4975ee94ebb9467ffa7232" exitCode=0 Jan 29 16:19:52 crc kubenswrapper[5008]: I0129 16:19:52.956734 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dqqz" event={"ID":"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55","Type":"ContainerDied","Data":"da32733de0e082104c5258a0d60c3c5480c31ba14a4975ee94ebb9467ffa7232"} Jan 29 16:19:53 crc kubenswrapper[5008]: I0129 16:19:53.966588 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dqqz" event={"ID":"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55","Type":"ContainerStarted","Data":"99992b523341b200d0e645e25f4067588907da760923a85c6858cf8885593845"} Jan 29 16:19:53 crc kubenswrapper[5008]: I0129 16:19:53.994893 5008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7dqqz" podStartSLOduration=2.586754501 podStartE2EDuration="5m38.994863878s" podCreationTimestamp="2026-01-29 16:14:15 +0000 UTC" firstStartedPulling="2026-01-29 16:14:16.93862721 +0000 UTC m=+2800.611481457" lastFinishedPulling="2026-01-29 16:19:53.346736597 +0000 UTC m=+3137.019590834" observedRunningTime="2026-01-29 16:19:53.985509271 +0000 UTC m=+3137.658363508" watchObservedRunningTime="2026-01-29 16:19:53.994863878 +0000 UTC m=+3137.667718135" Jan 29 16:19:55 crc kubenswrapper[5008]: I0129 16:19:55.963401 5008 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7dqqz" Jan 29 16:19:55 crc kubenswrapper[5008]: I0129 16:19:55.963813 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7dqqz" Jan 29 16:19:56 crc kubenswrapper[5008]: I0129 16:19:56.122942 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-bzslg_88b3b62b-8ee9-4541-a109-c52f195f55c2/kube-rbac-proxy/0.log" Jan 29 16:19:56 crc kubenswrapper[5008]: I0129 16:19:56.234743 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-bzslg_88b3b62b-8ee9-4541-a109-c52f195f55c2/controller/0.log" Jan 29 16:19:56 crc kubenswrapper[5008]: I0129 16:19:56.337276 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/cp-frr-files/0.log" Jan 29 16:19:56 crc kubenswrapper[5008]: I0129 16:19:56.536859 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/cp-frr-files/0.log" Jan 29 16:19:56 crc kubenswrapper[5008]: I0129 16:19:56.560386 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/cp-metrics/0.log" Jan 29 16:19:56 crc kubenswrapper[5008]: I0129 16:19:56.599077 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/cp-reloader/0.log" Jan 29 16:19:56 crc kubenswrapper[5008]: I0129 16:19:56.666863 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/cp-reloader/0.log" Jan 29 16:19:56 crc kubenswrapper[5008]: I0129 16:19:56.820651 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/cp-frr-files/0.log" Jan 29 16:19:56 crc kubenswrapper[5008]: I0129 16:19:56.825949 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/cp-reloader/0.log" Jan 29 16:19:56 crc kubenswrapper[5008]: I0129 16:19:56.872190 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/cp-metrics/0.log" Jan 29 16:19:56 crc kubenswrapper[5008]: I0129 16:19:56.943051 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/cp-metrics/0.log" Jan 29 16:19:57 crc kubenswrapper[5008]: I0129 16:19:57.020166 5008 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" containerName="registry-server" probeResult="failure" output=< Jan 29 16:19:57 crc kubenswrapper[5008]: timeout: failed to connect service ":50051" within 1s Jan 29 16:19:57 crc kubenswrapper[5008]: > Jan 29 16:19:57 crc kubenswrapper[5008]: I0129 16:19:57.115497 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/cp-frr-files/0.log" Jan 29 16:19:57 crc kubenswrapper[5008]: I0129 16:19:57.136512 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/cp-reloader/0.log" Jan 29 16:19:57 crc kubenswrapper[5008]: I0129 16:19:57.136512 
5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/cp-metrics/0.log" Jan 29 16:19:57 crc kubenswrapper[5008]: I0129 16:19:57.148676 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/controller/0.log" Jan 29 16:19:57 crc kubenswrapper[5008]: I0129 16:19:57.334202 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/frr-metrics/0.log" Jan 29 16:19:57 crc kubenswrapper[5008]: I0129 16:19:57.347119 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/kube-rbac-proxy-frr/0.log" Jan 29 16:19:57 crc kubenswrapper[5008]: I0129 16:19:57.430186 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/kube-rbac-proxy/0.log" Jan 29 16:19:57 crc kubenswrapper[5008]: I0129 16:19:57.621091 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/reloader/0.log" Jan 29 16:19:57 crc kubenswrapper[5008]: I0129 16:19:57.683740 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-4l5h6_fc07e8e0-7de8-4d7a-96f9-8ccdd7180f07/frr-k8s-webhook-server/0.log" Jan 29 16:19:57 crc kubenswrapper[5008]: I0129 16:19:57.939832 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-8644cb7465-xww64_65797f8d-98da-4cbc-a7df-cd6d00fda635/manager/0.log" Jan 29 16:19:58 crc kubenswrapper[5008]: I0129 16:19:58.052259 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6b97546cb-r5lk9_42235713-405f-4dc1-9e60-3b1615ec49a2/webhook-server/0.log" Jan 29 16:19:58 crc kubenswrapper[5008]: I0129 16:19:58.214399 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-dmtw7_8927915f-8333-415c-82e1-47d948a6e8ad/kube-rbac-proxy/0.log" Jan 29 16:19:58 crc kubenswrapper[5008]: I0129 16:19:58.738735 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-dmtw7_8927915f-8333-415c-82e1-47d948a6e8ad/speaker/0.log" Jan 29 16:19:58 crc kubenswrapper[5008]: I0129 16:19:58.794941 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-95tm6_17fc1fa7-5758-4768-a6f5-5b63b63d0948/frr/0.log" Jan 29 16:20:06 crc kubenswrapper[5008]: I0129 16:20:06.027662 5008 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7dqqz" Jan 29 16:20:06 crc kubenswrapper[5008]: I0129 16:20:06.094822 5008 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7dqqz" Jan 29 16:20:06 crc kubenswrapper[5008]: I0129 16:20:06.266426 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7dqqz"] Jan 29 16:20:07 crc kubenswrapper[5008]: I0129 16:20:07.075544 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7dqqz" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" containerName="registry-server" containerID="cri-o://99992b523341b200d0e645e25f4067588907da760923a85c6858cf8885593845" gracePeriod=2 Jan 29 16:20:07 crc kubenswrapper[5008]: I0129 
16:20:07.519861 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7dqqz" Jan 29 16:20:07 crc kubenswrapper[5008]: I0129 16:20:07.545671 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-catalog-content\") pod \"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55\" (UID: \"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55\") " Jan 29 16:20:07 crc kubenswrapper[5008]: I0129 16:20:07.545776 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-utilities\") pod \"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55\" (UID: \"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55\") " Jan 29 16:20:07 crc kubenswrapper[5008]: I0129 16:20:07.545841 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bl7kv\" (UniqueName: \"kubernetes.io/projected/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-kube-api-access-bl7kv\") pod \"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55\" (UID: \"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55\") " Jan 29 16:20:07 crc kubenswrapper[5008]: I0129 16:20:07.552018 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-utilities" (OuterVolumeSpecName: "utilities") pod "4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" (UID: "4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:20:07 crc kubenswrapper[5008]: I0129 16:20:07.553976 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-kube-api-access-bl7kv" (OuterVolumeSpecName: "kube-api-access-bl7kv") pod "4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" (UID: "4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"). InnerVolumeSpecName "kube-api-access-bl7kv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:20:07 crc kubenswrapper[5008]: I0129 16:20:07.649247 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" (UID: "4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:20:07 crc kubenswrapper[5008]: I0129 16:20:07.649945 5008 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:20:07 crc kubenswrapper[5008]: I0129 16:20:07.649964 5008 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:20:07 crc kubenswrapper[5008]: I0129 16:20:07.649973 5008 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bl7kv\" (UniqueName: \"kubernetes.io/projected/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55-kube-api-access-bl7kv\") on node \"crc\" DevicePath \"\"" Jan 29 16:20:08 crc kubenswrapper[5008]: I0129 16:20:08.085458 5008 generic.go:334] "Generic (PLEG): container finished" podID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" containerID="99992b523341b200d0e645e25f4067588907da760923a85c6858cf8885593845" exitCode=0 Jan 29 16:20:08 crc kubenswrapper[5008]: I0129 16:20:08.085510 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7dqqz" Jan 29 16:20:08 crc kubenswrapper[5008]: I0129 16:20:08.085524 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dqqz" event={"ID":"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55","Type":"ContainerDied","Data":"99992b523341b200d0e645e25f4067588907da760923a85c6858cf8885593845"} Jan 29 16:20:08 crc kubenswrapper[5008]: I0129 16:20:08.085597 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dqqz" event={"ID":"4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55","Type":"ContainerDied","Data":"4bc8d674639c663e12f180fa6c89b4e70c92f8b3fda66ccac4d3e879acdf15cc"} Jan 29 16:20:08 crc kubenswrapper[5008]: I0129 16:20:08.085637 5008 scope.go:117] "RemoveContainer" containerID="99992b523341b200d0e645e25f4067588907da760923a85c6858cf8885593845" Jan 29 16:20:08 crc kubenswrapper[5008]: I0129 16:20:08.107726 5008 scope.go:117] "RemoveContainer" containerID="da32733de0e082104c5258a0d60c3c5480c31ba14a4975ee94ebb9467ffa7232" Jan 29 16:20:08 crc kubenswrapper[5008]: I0129 16:20:08.124587 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7dqqz"] Jan 29 16:20:08 crc kubenswrapper[5008]: I0129 16:20:08.136390 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7dqqz"] Jan 29 16:20:08 crc kubenswrapper[5008]: I0129 16:20:08.137641 5008 scope.go:117] "RemoveContainer" containerID="5afd0a214b8f8d22e6164362eafb7f99729ea9d22bade9b4d16142746c8240a6" Jan 29 16:20:08 crc kubenswrapper[5008]: I0129 16:20:08.189720 5008 scope.go:117] "RemoveContainer" containerID="99992b523341b200d0e645e25f4067588907da760923a85c6858cf8885593845" Jan 29 16:20:08 crc kubenswrapper[5008]: E0129 16:20:08.190600 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99992b523341b200d0e645e25f4067588907da760923a85c6858cf8885593845\": container with ID starting with 99992b523341b200d0e645e25f4067588907da760923a85c6858cf8885593845 not found: ID does not exist" containerID="99992b523341b200d0e645e25f4067588907da760923a85c6858cf8885593845" Jan 29 16:20:08 crc kubenswrapper[5008]: I0129 16:20:08.190665 5008 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99992b523341b200d0e645e25f4067588907da760923a85c6858cf8885593845"} err="failed to get container status \"99992b523341b200d0e645e25f4067588907da760923a85c6858cf8885593845\": rpc error: code = NotFound desc = could not find container \"99992b523341b200d0e645e25f4067588907da760923a85c6858cf8885593845\": container with ID starting with 99992b523341b200d0e645e25f4067588907da760923a85c6858cf8885593845 not found: ID does not exist" Jan 29 16:20:08 crc kubenswrapper[5008]: I0129 16:20:08.190700 5008 scope.go:117] "RemoveContainer" containerID="da32733de0e082104c5258a0d60c3c5480c31ba14a4975ee94ebb9467ffa7232" Jan 29 16:20:08 crc kubenswrapper[5008]: E0129 16:20:08.191069 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da32733de0e082104c5258a0d60c3c5480c31ba14a4975ee94ebb9467ffa7232\": container with ID starting with da32733de0e082104c5258a0d60c3c5480c31ba14a4975ee94ebb9467ffa7232 not found: ID does not exist" containerID="da32733de0e082104c5258a0d60c3c5480c31ba14a4975ee94ebb9467ffa7232" Jan 29 16:20:08 crc kubenswrapper[5008]: I0129 16:20:08.191103 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da32733de0e082104c5258a0d60c3c5480c31ba14a4975ee94ebb9467ffa7232"} err="failed to get container status \"da32733de0e082104c5258a0d60c3c5480c31ba14a4975ee94ebb9467ffa7232\": rpc error: code = NotFound desc = could not find container \"da32733de0e082104c5258a0d60c3c5480c31ba14a4975ee94ebb9467ffa7232\": container with ID starting with da32733de0e082104c5258a0d60c3c5480c31ba14a4975ee94ebb9467ffa7232 not found: ID does not exist" Jan 29 16:20:08 crc kubenswrapper[5008]: I0129 16:20:08.191120 5008 scope.go:117] "RemoveContainer" containerID="5afd0a214b8f8d22e6164362eafb7f99729ea9d22bade9b4d16142746c8240a6" Jan 29 16:20:08 crc kubenswrapper[5008]: E0129 16:20:08.191503 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5afd0a214b8f8d22e6164362eafb7f99729ea9d22bade9b4d16142746c8240a6\": container with ID starting with 5afd0a214b8f8d22e6164362eafb7f99729ea9d22bade9b4d16142746c8240a6 not found: ID does not exist" containerID="5afd0a214b8f8d22e6164362eafb7f99729ea9d22bade9b4d16142746c8240a6" Jan 29 16:20:08 crc kubenswrapper[5008]: I0129 16:20:08.191529 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5afd0a214b8f8d22e6164362eafb7f99729ea9d22bade9b4d16142746c8240a6"} err="failed to get container status \"5afd0a214b8f8d22e6164362eafb7f99729ea9d22bade9b4d16142746c8240a6\": rpc error: code = NotFound desc = could not find container \"5afd0a214b8f8d22e6164362eafb7f99729ea9d22bade9b4d16142746c8240a6\": container with ID starting with 5afd0a214b8f8d22e6164362eafb7f99729ea9d22bade9b4d16142746c8240a6 not found: ID does not exist" Jan 29 16:20:09 crc kubenswrapper[5008]: I0129 16:20:09.335038 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55" path="/var/lib/kubelet/pods/4101eecf-8a4f-4ec9-9b3e-7dc1d9a34f55/volumes" Jan 29 16:20:10 crc kubenswrapper[5008]: I0129 16:20:10.679649 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx_451500d6-673a-42ac-84b5-75d3b9d46998/util/0.log" Jan 29 16:20:10 crc kubenswrapper[5008]: 
I0129 16:20:10.803873 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx_451500d6-673a-42ac-84b5-75d3b9d46998/util/0.log" Jan 29 16:20:10 crc kubenswrapper[5008]: I0129 16:20:10.839507 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx_451500d6-673a-42ac-84b5-75d3b9d46998/pull/0.log" Jan 29 16:20:10 crc kubenswrapper[5008]: I0129 16:20:10.840181 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx_451500d6-673a-42ac-84b5-75d3b9d46998/pull/0.log" Jan 29 16:20:11 crc kubenswrapper[5008]: I0129 16:20:11.047940 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx_451500d6-673a-42ac-84b5-75d3b9d46998/util/0.log" Jan 29 16:20:11 crc kubenswrapper[5008]: I0129 16:20:11.060772 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx_451500d6-673a-42ac-84b5-75d3b9d46998/pull/0.log" Jan 29 16:20:11 crc kubenswrapper[5008]: I0129 16:20:11.090555 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5tkrx_451500d6-673a-42ac-84b5-75d3b9d46998/extract/0.log" Jan 29 16:20:11 crc kubenswrapper[5008]: I0129 16:20:11.225084 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s_d4466921-85af-471c-956d-71f6576ca8f1/util/0.log" Jan 29 16:20:11 crc kubenswrapper[5008]: I0129 16:20:11.374126 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s_d4466921-85af-471c-956d-71f6576ca8f1/pull/0.log" Jan 29 16:20:11 crc kubenswrapper[5008]: I0129 16:20:11.378075 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s_d4466921-85af-471c-956d-71f6576ca8f1/util/0.log" Jan 29 16:20:11 crc kubenswrapper[5008]: I0129 16:20:11.396503 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s_d4466921-85af-471c-956d-71f6576ca8f1/pull/0.log" Jan 29 16:20:11 crc kubenswrapper[5008]: I0129 16:20:11.515336 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s_d4466921-85af-471c-956d-71f6576ca8f1/util/0.log" Jan 29 16:20:11 crc kubenswrapper[5008]: I0129 16:20:11.542217 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s_d4466921-85af-471c-956d-71f6576ca8f1/extract/0.log" Jan 29 16:20:11 crc kubenswrapper[5008]: I0129 16:20:11.544811 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713n8j4s_d4466921-85af-471c-956d-71f6576ca8f1/pull/0.log" Jan 29 16:20:11 crc kubenswrapper[5008]: I0129 16:20:11.687823 5008 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-l2shr_6263e09b-1d9a-4833-851b-1cb8c8132dfe/extract-utilities/0.log" Jan 29 16:20:11 crc kubenswrapper[5008]: I0129 16:20:11.839582 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l2shr_6263e09b-1d9a-4833-851b-1cb8c8132dfe/extract-content/0.log" Jan 29 16:20:11 crc kubenswrapper[5008]: I0129 16:20:11.846106 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l2shr_6263e09b-1d9a-4833-851b-1cb8c8132dfe/extract-utilities/0.log" Jan 29 16:20:11 crc kubenswrapper[5008]: I0129 16:20:11.858111 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l2shr_6263e09b-1d9a-4833-851b-1cb8c8132dfe/extract-content/0.log" Jan 29 16:20:12 crc kubenswrapper[5008]: I0129 16:20:12.001796 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l2shr_6263e09b-1d9a-4833-851b-1cb8c8132dfe/extract-utilities/0.log" Jan 29 16:20:12 crc kubenswrapper[5008]: I0129 16:20:12.049057 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l2shr_6263e09b-1d9a-4833-851b-1cb8c8132dfe/extract-content/0.log" Jan 29 16:20:12 crc kubenswrapper[5008]: I0129 16:20:12.252173 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5br4h_b4517208-d057-4652-a3c2-fb8374a45a04/extract-utilities/0.log" Jan 29 16:20:12 crc kubenswrapper[5008]: I0129 16:20:12.416815 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l2shr_6263e09b-1d9a-4833-851b-1cb8c8132dfe/registry-server/0.log" Jan 29 16:20:12 crc kubenswrapper[5008]: I0129 16:20:12.425458 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5br4h_b4517208-d057-4652-a3c2-fb8374a45a04/extract-utilities/0.log" Jan 29 16:20:12 crc kubenswrapper[5008]: I0129 16:20:12.506834 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5br4h_b4517208-d057-4652-a3c2-fb8374a45a04/extract-content/0.log" Jan 29 16:20:12 crc kubenswrapper[5008]: I0129 16:20:12.511490 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5br4h_b4517208-d057-4652-a3c2-fb8374a45a04/extract-content/0.log" Jan 29 16:20:12 crc kubenswrapper[5008]: I0129 16:20:12.665852 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5br4h_b4517208-d057-4652-a3c2-fb8374a45a04/extract-content/0.log" Jan 29 16:20:12 crc kubenswrapper[5008]: I0129 16:20:12.671347 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5br4h_b4517208-d057-4652-a3c2-fb8374a45a04/extract-utilities/0.log" Jan 29 16:20:12 crc kubenswrapper[5008]: I0129 16:20:12.872743 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-pz9kz_077a9343-695d-4180-9255-41f1eaeb58a3/marketplace-operator/0.log" Jan 29 16:20:12 crc kubenswrapper[5008]: I0129 16:20:12.957752 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nd64n_1babb539-12b9-4532-b9c3-bc165829c40e/extract-utilities/0.log" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.191571 5008 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-nd64n_1babb539-12b9-4532-b9c3-bc165829c40e/extract-utilities/0.log" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.191615 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nd64n_1babb539-12b9-4532-b9c3-bc165829c40e/extract-content/0.log" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.248235 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nd64n_1babb539-12b9-4532-b9c3-bc165829c40e/extract-content/0.log" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.262790 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5br4h_b4517208-d057-4652-a3c2-fb8374a45a04/registry-server/0.log" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.384843 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nd64n_1babb539-12b9-4532-b9c3-bc165829c40e/extract-content/0.log" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.387293 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nd64n_1babb539-12b9-4532-b9c3-bc165829c40e/extract-utilities/0.log" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.529275 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nd64n_1babb539-12b9-4532-b9c3-bc165829c40e/registry-server/0.log" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.596684 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5g5wg_5fbd5270-4a24-47ba-a0cf-0c3382a833c0/extract-utilities/0.log" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.711151 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5g5wg_5fbd5270-4a24-47ba-a0cf-0c3382a833c0/extract-utilities/0.log" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.734362 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5g5wg_5fbd5270-4a24-47ba-a0cf-0c3382a833c0/extract-content/0.log" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.735709 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5g5wg_5fbd5270-4a24-47ba-a0cf-0c3382a833c0/extract-content/0.log" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.886984 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5g5wg_5fbd5270-4a24-47ba-a0cf-0c3382a833c0/extract-utilities/0.log" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.937421 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5g5wg_5fbd5270-4a24-47ba-a0cf-0c3382a833c0/extract-content/0.log" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.990842 5008 patch_prober.go:28] interesting pod/machine-config-daemon-gk9q8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.990909 5008 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.990958 5008 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.991806 5008 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec"} pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:20:13 crc kubenswrapper[5008]: I0129 16:20:13.991886 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerName="machine-config-daemon" containerID="cri-o://4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" gracePeriod=600 Jan 29 16:20:14 crc kubenswrapper[5008]: I0129 16:20:14.680842 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5g5wg_5fbd5270-4a24-47ba-a0cf-0c3382a833c0/registry-server/0.log" Jan 29 16:20:14 crc kubenswrapper[5008]: E0129 16:20:14.685176 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:20:15 crc kubenswrapper[5008]: I0129 16:20:15.145976 5008 generic.go:334] "Generic (PLEG): container finished" podID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" exitCode=0 Jan 29 16:20:15 crc kubenswrapper[5008]: I0129 16:20:15.146274 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerDied","Data":"4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec"} Jan 29 16:20:15 crc kubenswrapper[5008]: I0129 16:20:15.146315 5008 scope.go:117] "RemoveContainer" containerID="b700e8418443771845187d679243e192744c1e88425ed21d7245867ce870d957" Jan 29 16:20:15 crc kubenswrapper[5008]: I0129 16:20:15.147001 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:20:15 crc kubenswrapper[5008]: E0129 16:20:15.147340 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:20:28 crc kubenswrapper[5008]: I0129 16:20:28.324016 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 
Jan 29 16:20:14 crc kubenswrapper[5008]: I0129 16:20:14.680842 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5g5wg_5fbd5270-4a24-47ba-a0cf-0c3382a833c0/registry-server/0.log"
Jan 29 16:20:14 crc kubenswrapper[5008]: E0129 16:20:14.685176 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244"
Jan 29 16:20:15 crc kubenswrapper[5008]: I0129 16:20:15.145976 5008 generic.go:334] "Generic (PLEG): container finished" podID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" exitCode=0
Jan 29 16:20:15 crc kubenswrapper[5008]: I0129 16:20:15.146274 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" event={"ID":"ca0fcb2d-733d-4bde-9bbf-3f7082d0e244","Type":"ContainerDied","Data":"4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec"}
Jan 29 16:20:15 crc kubenswrapper[5008]: I0129 16:20:15.146315 5008 scope.go:117] "RemoveContainer" containerID="b700e8418443771845187d679243e192744c1e88425ed21d7245867ce870d957"
Jan 29 16:20:15 crc kubenswrapper[5008]: I0129 16:20:15.147001 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec"
Jan 29 16:20:15 crc kubenswrapper[5008]: E0129 16:20:15.147340 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244"
Jan 29 16:20:28 crc kubenswrapper[5008]: I0129 16:20:28.324016 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec"
Jan 29 16:20:28 crc kubenswrapper[5008]: E0129 16:20:28.324770 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244"
Jan 29 16:20:37 crc kubenswrapper[5008]: E0129 16:20:37.822595 5008 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.50:52936->38.102.83.50:37791: write tcp 38.102.83.50:52936->38.102.83.50:37791: write: broken pipe
Jan 29 16:20:43 crc kubenswrapper[5008]: I0129 16:20:43.324412 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec"
Jan 29 16:20:43 crc kubenswrapper[5008]: E0129 16:20:43.325343 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244"
Jan 29 16:20:44 crc kubenswrapper[5008]: I0129 16:20:44.114015 5008 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5c6fbdb57f-zvhpz" podUID="64c08f63-12a2-4dfb-b96d-0a12e9725021" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Jan 29 16:20:55 crc kubenswrapper[5008]: I0129 16:20:55.331885 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec"
Jan 29 16:20:55 crc kubenswrapper[5008]: E0129 16:20:55.333616 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244"
Jan 29 16:21:07 crc kubenswrapper[5008]: I0129 16:21:07.331986 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec"
Jan 29 16:21:07 crc kubenswrapper[5008]: E0129 16:21:07.332881 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244"
Jan 29 16:21:20 crc kubenswrapper[5008]: I0129 16:21:20.324291 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec"
Jan 29 16:21:20 crc kubenswrapper[5008]: E0129 16:21:20.325377 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244"
Jan 29 16:21:35 crc kubenswrapper[5008]: I0129 16:21:35.324315 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec"
Jan 29 16:21:35 crc kubenswrapper[5008]: E0129 16:21:35.325623 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244"
Jan 29 16:21:50 crc kubenswrapper[5008]: I0129 16:21:50.324531 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec"
Jan 29 16:21:50 crc kubenswrapper[5008]: E0129 16:21:50.327132 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244"
Jan 29 16:21:53 crc kubenswrapper[5008]: I0129 16:21:53.077208 5008 generic.go:334] "Generic (PLEG): container finished" podID="d320dd2e-14dc-4c54-86bf-25b5abd30dae" containerID="3327cb68737f553fc5a657c32f672ee7fa9a240ba24d843df1220fe098f622fe" exitCode=0
Jan 29 16:21:53 crc kubenswrapper[5008]: I0129 16:21:53.077305 5008 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nvrh2/must-gather-f7qvt" event={"ID":"d320dd2e-14dc-4c54-86bf-25b5abd30dae","Type":"ContainerDied","Data":"3327cb68737f553fc5a657c32f672ee7fa9a240ba24d843df1220fe098f622fe"}
Jan 29 16:21:53 crc kubenswrapper[5008]: I0129 16:21:53.078013 5008 scope.go:117] "RemoveContainer" containerID="3327cb68737f553fc5a657c32f672ee7fa9a240ba24d843df1220fe098f622fe"
Jan 29 16:21:53 crc kubenswrapper[5008]: I0129 16:21:53.980439 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-nvrh2_must-gather-f7qvt_d320dd2e-14dc-4c54-86bf-25b5abd30dae/gather/0.log"
Jan 29 16:22:02 crc kubenswrapper[5008]: I0129 16:22:02.614263 5008 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-nvrh2/must-gather-f7qvt"]
Jan 29 16:22:02 crc kubenswrapper[5008]: I0129 16:22:02.615138 5008 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-nvrh2/must-gather-f7qvt" podUID="d320dd2e-14dc-4c54-86bf-25b5abd30dae" containerName="copy" containerID="cri-o://ea23d1b8036291fc45a3f31fc97e29dc32fd1ff69a4590d0e2497457df3a82ce" gracePeriod=2
Jan 29 16:22:02 crc kubenswrapper[5008]: I0129 16:22:02.622952 5008 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-nvrh2/must-gather-f7qvt"]
Jan 29 16:22:03 crc kubenswrapper[5008]: I0129 16:22:03.092796 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-nvrh2_must-gather-f7qvt_d320dd2e-14dc-4c54-86bf-25b5abd30dae/copy/0.log"
Jan 29 16:22:03 crc kubenswrapper[5008]: I0129 16:22:03.093645 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nvrh2/must-gather-f7qvt"
Jan 29 16:22:03 crc kubenswrapper[5008]: I0129 16:22:03.174935 5008 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-nvrh2_must-gather-f7qvt_d320dd2e-14dc-4c54-86bf-25b5abd30dae/copy/0.log"
Jan 29 16:22:03 crc kubenswrapper[5008]: I0129 16:22:03.175284 5008 generic.go:334] "Generic (PLEG): container finished" podID="d320dd2e-14dc-4c54-86bf-25b5abd30dae" containerID="ea23d1b8036291fc45a3f31fc97e29dc32fd1ff69a4590d0e2497457df3a82ce" exitCode=143
Jan 29 16:22:03 crc kubenswrapper[5008]: I0129 16:22:03.175334 5008 scope.go:117] "RemoveContainer" containerID="ea23d1b8036291fc45a3f31fc97e29dc32fd1ff69a4590d0e2497457df3a82ce"
Jan 29 16:22:03 crc kubenswrapper[5008]: I0129 16:22:03.175398 5008 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nvrh2/must-gather-f7qvt"
Jan 29 16:22:03 crc kubenswrapper[5008]: I0129 16:22:03.196844 5008 scope.go:117] "RemoveContainer" containerID="3327cb68737f553fc5a657c32f672ee7fa9a240ba24d843df1220fe098f622fe"
Jan 29 16:22:03 crc kubenswrapper[5008]: I0129 16:22:03.211154 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d320dd2e-14dc-4c54-86bf-25b5abd30dae-must-gather-output\") pod \"d320dd2e-14dc-4c54-86bf-25b5abd30dae\" (UID: \"d320dd2e-14dc-4c54-86bf-25b5abd30dae\") "
Jan 29 16:22:03 crc kubenswrapper[5008]: I0129 16:22:03.211221 5008 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5tbc\" (UniqueName: \"kubernetes.io/projected/d320dd2e-14dc-4c54-86bf-25b5abd30dae-kube-api-access-p5tbc\") pod \"d320dd2e-14dc-4c54-86bf-25b5abd30dae\" (UID: \"d320dd2e-14dc-4c54-86bf-25b5abd30dae\") "
Jan 29 16:22:03 crc kubenswrapper[5008]: I0129 16:22:03.217386 5008 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d320dd2e-14dc-4c54-86bf-25b5abd30dae-kube-api-access-p5tbc" (OuterVolumeSpecName: "kube-api-access-p5tbc") pod "d320dd2e-14dc-4c54-86bf-25b5abd30dae" (UID: "d320dd2e-14dc-4c54-86bf-25b5abd30dae"). InnerVolumeSpecName "kube-api-access-p5tbc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:22:03 crc kubenswrapper[5008]: I0129 16:22:03.268883 5008 scope.go:117] "RemoveContainer" containerID="ea23d1b8036291fc45a3f31fc97e29dc32fd1ff69a4590d0e2497457df3a82ce"
Jan 29 16:22:03 crc kubenswrapper[5008]: E0129 16:22:03.269449 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea23d1b8036291fc45a3f31fc97e29dc32fd1ff69a4590d0e2497457df3a82ce\": container with ID starting with ea23d1b8036291fc45a3f31fc97e29dc32fd1ff69a4590d0e2497457df3a82ce not found: ID does not exist" containerID="ea23d1b8036291fc45a3f31fc97e29dc32fd1ff69a4590d0e2497457df3a82ce"
Jan 29 16:22:03 crc kubenswrapper[5008]: I0129 16:22:03.269493 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea23d1b8036291fc45a3f31fc97e29dc32fd1ff69a4590d0e2497457df3a82ce"} err="failed to get container status \"ea23d1b8036291fc45a3f31fc97e29dc32fd1ff69a4590d0e2497457df3a82ce\": rpc error: code = NotFound desc = could not find container \"ea23d1b8036291fc45a3f31fc97e29dc32fd1ff69a4590d0e2497457df3a82ce\": container with ID starting with ea23d1b8036291fc45a3f31fc97e29dc32fd1ff69a4590d0e2497457df3a82ce not found: ID does not exist"
Jan 29 16:22:03 crc kubenswrapper[5008]: I0129 16:22:03.269527 5008 scope.go:117] "RemoveContainer" containerID="3327cb68737f553fc5a657c32f672ee7fa9a240ba24d843df1220fe098f622fe"
Jan 29 16:22:03 crc kubenswrapper[5008]: E0129 16:22:03.270730 5008 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3327cb68737f553fc5a657c32f672ee7fa9a240ba24d843df1220fe098f622fe\": container with ID starting with 3327cb68737f553fc5a657c32f672ee7fa9a240ba24d843df1220fe098f622fe not found: ID does not exist" containerID="3327cb68737f553fc5a657c32f672ee7fa9a240ba24d843df1220fe098f622fe"
Jan 29 16:22:03 crc kubenswrapper[5008]: I0129 16:22:03.270854 5008 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3327cb68737f553fc5a657c32f672ee7fa9a240ba24d843df1220fe098f622fe"} err="failed to get container status \"3327cb68737f553fc5a657c32f672ee7fa9a240ba24d843df1220fe098f622fe\": rpc error: code = NotFound desc = could not find container \"3327cb68737f553fc5a657c32f672ee7fa9a240ba24d843df1220fe098f622fe\": container with ID starting with 3327cb68737f553fc5a657c32f672ee7fa9a240ba24d843df1220fe098f622fe not found: ID does not exist"
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:22:03 crc kubenswrapper[5008]: I0129 16:22:03.416619 5008 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d320dd2e-14dc-4c54-86bf-25b5abd30dae-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:04 crc kubenswrapper[5008]: I0129 16:22:04.323632 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:22:04 crc kubenswrapper[5008]: E0129 16:22:04.324143 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:22:05 crc kubenswrapper[5008]: I0129 16:22:05.343281 5008 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d320dd2e-14dc-4c54-86bf-25b5abd30dae" path="/var/lib/kubelet/pods/d320dd2e-14dc-4c54-86bf-25b5abd30dae/volumes" Jan 29 16:22:18 crc kubenswrapper[5008]: I0129 16:22:18.323292 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:22:18 crc kubenswrapper[5008]: E0129 16:22:18.323959 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:22:33 crc kubenswrapper[5008]: I0129 16:22:33.323737 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:22:33 crc kubenswrapper[5008]: E0129 16:22:33.324552 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:22:48 crc kubenswrapper[5008]: I0129 16:22:48.323866 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:22:48 crc kubenswrapper[5008]: E0129 16:22:48.324689 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:23:01 crc kubenswrapper[5008]: I0129 16:23:01.323862 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:23:01 crc kubenswrapper[5008]: E0129 16:23:01.324615 5008 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:23:15 crc kubenswrapper[5008]: I0129 16:23:15.323885 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:23:15 crc kubenswrapper[5008]: E0129 16:23:15.324767 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:23:26 crc kubenswrapper[5008]: I0129 16:23:26.324265 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:23:26 crc kubenswrapper[5008]: E0129 16:23:26.325256 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:23:40 crc kubenswrapper[5008]: I0129 16:23:40.323827 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:23:40 crc kubenswrapper[5008]: E0129 16:23:40.324633 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:23:54 crc kubenswrapper[5008]: I0129 16:23:54.324366 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:23:54 crc kubenswrapper[5008]: E0129 16:23:54.325381 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:24:05 crc kubenswrapper[5008]: I0129 16:24:05.324359 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:24:05 crc kubenswrapper[5008]: E0129 16:24:05.326081 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:24:19 crc kubenswrapper[5008]: I0129 16:24:19.324563 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:24:19 crc kubenswrapper[5008]: E0129 16:24:19.325812 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:24:32 crc kubenswrapper[5008]: I0129 16:24:32.323651 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:24:32 crc kubenswrapper[5008]: E0129 16:24:32.324456 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244" Jan 29 16:24:47 crc kubenswrapper[5008]: I0129 16:24:47.332170 5008 scope.go:117] "RemoveContainer" containerID="4869b8ff7292689d034b462eb087eeb3d660872c7c7ec7e800ab22acc04bbfec" Jan 29 16:24:47 crc kubenswrapper[5008]: E0129 16:24:47.333063 5008 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gk9q8_openshift-machine-config-operator(ca0fcb2d-733d-4bde-9bbf-3f7082d0e244)\"" pod="openshift-machine-config-operator/machine-config-daemon-gk9q8" podUID="ca0fcb2d-733d-4bde-9bbf-3f7082d0e244"
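From the must-gather cleanup onward the log settles into a fixed cadence: roughly every 10 to 15 seconds the pod worker wakes, tries RemoveContainer/StartContainer for machine-config-daemon, and is refused because the container sits in CrashLoopBackOff at the maximum back-off of 5m0s. Only that 5m0s cap is visible in the log; the initial delay and doubling factor in the sketch below are the upstream kubelet defaults and should be read as assumptions:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed kubelet defaults: the restart delay starts at 10s and doubles
	// after every crash, capped at 5 minutes (the "back-off 5m0s" in the log).
	const initial = 10 * time.Second
	const maxDelay = 5 * time.Minute

	delay := initial
	for restart := 1; restart <= 8; restart++ {
		fmt.Printf("restart %d: wait %v\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // holds at 5m0s from here on
		}
	}
}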